Tensorflow NN architecture incompatible shape - python

I am trying to create a Multitask NN using Tensorflow. Following is the architecture that I am trying to develop:
METRICS = [tf.keras.metrics.TruePositives(name='TP'),
           tf.keras.metrics.FalsePositives(name='FP'),
           tf.keras.metrics.TrueNegatives(name='TN'),
           tf.keras.metrics.FalseNegatives(name='FN'),
           tf.keras.metrics.Precision(name='precision'),
           tf.keras.metrics.Recall(name='recall'),
           tf.keras.metrics.AUC(curve='PR', name='PR-AUC')]

input_shape = (X_train.shape[1],)
inputlayer = tf.keras.layers.Input(shape=input_shape)
l1 = tf.keras.layers.Dense(input_shape[0]*2, activation='relu')(inputlayer)
l2 = tf.keras.layers.Dropout(0.1)(l1)
l3 = tf.keras.layers.Dense(int(input_shape[0]/2), activation='relu')(l2)
output1 = tf.keras.layers.Dense(1, activation='sigmoid', name='output1')(l3)
output2 = tf.keras.layers.Dense(10, activation='softmax', name='output2')(l3)
output3 = tf.keras.layers.Dense(12, activation='softmax', name='output3')(l3)
model = tf.keras.Model(inputs=inputlayer, outputs=[output1, output2, output3])
model.compile(loss={"output1": 'binary_crossentropy',
                    "output2": 'categorical_crossentropy',
                    "output3": 'categorical_crossentropy'},
              optimizer=tf.keras.optimizers.Adam(learning_rate=.01),
              metrics=METRICS, loss_weights=[1, 1e-1, 1e-1])
Then I tried to train the model like this:
BATCH_SIZE= 20
model.fit(X_train, [y1_train,y2_train,y3_train], batch_size=BATCH_SIZE, epochs=10, verbose=0)
But I got the following issue:
ValueError: Shapes (None, 1) and (None, 10) are incompatible
I already verified the number of labels for each output: they are 2, 10, and 12 respectively.
I can't understand what exactly the problem is. Can anyone give me a suggestion, please?

I think you might have mixed up the order of your labels. Here is a working example:
import tensorflow as tf

METRICS = [tf.keras.metrics.TruePositives(name='TP'),
           tf.keras.metrics.FalsePositives(name='FP'),
           tf.keras.metrics.TrueNegatives(name='TN'),
           tf.keras.metrics.FalseNegatives(name='FN'),
           tf.keras.metrics.Precision(name='precision'),
           tf.keras.metrics.Recall(name='recall'),
           tf.keras.metrics.AUC(curve='PR', name='PR-AUC')]

input_shape = (31,)
inputlayer = tf.keras.layers.Input(shape=input_shape)
l1 = tf.keras.layers.Dense(input_shape[0]*2, activation='relu')(inputlayer)
l2 = tf.keras.layers.Dropout(0.1)(l1)
l3 = tf.keras.layers.Dense(int(input_shape[0]/2), activation='relu')(l2)
output1 = tf.keras.layers.Dense(1, activation='sigmoid', name='output1')(l3)
output2 = tf.keras.layers.Dense(10, activation='softmax', name='output2')(l3)
output3 = tf.keras.layers.Dense(12, activation='softmax', name='output3')(l3)
model = tf.keras.Model(inputs=inputlayer, outputs=[output1, output2, output3])
model.compile(loss={"output1": 'binary_crossentropy',
                    "output2": 'categorical_crossentropy',
                    "output3": 'categorical_crossentropy'},
              optimizer=tf.keras.optimizers.Adam(learning_rate=.01),
              metrics=METRICS, loss_weights=[1, 1e-1, 1e-1])

y1_train = tf.random.uniform((50, 1), maxval=2)
y2_train = tf.random.uniform((50, 10), maxval=11)
y3_train = tf.random.uniform((50, 12), maxval=13)
model.fit(tf.random.normal((50, 31)), [y1_train, y2_train, y3_train], batch_size=20, epochs=10)
You need to make sure that y1_train, y2_train, and y3_train are in the correct order and have the correct shape, that is (samples, 1), (samples, 10), and (samples, 12).
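If your labels are currently integer class indices rather than one-hot vectors, you can convert them to the shapes above with tf.keras.utils.to_categorical. A minimal sketch, assuming (hypothetically) that y2_train holds indices 0-9 and y3_train indices 0-11:
import numpy as np
import tensorflow as tf

y1_train = np.random.randint(0, 2, size=(50, 1)).astype('float32')  # binary labels, shape (50, 1)
y2_idx = np.random.randint(0, 10, size=50)  # hypothetical class indices 0-9
y3_idx = np.random.randint(0, 12, size=50)  # hypothetical class indices 0-11

# one-hot encode to match the softmax widths of output2 and output3
y2_train = tf.keras.utils.to_categorical(y2_idx, num_classes=10)  # shape (50, 10)
y3_train = tf.keras.utils.to_categorical(y3_idx, num_classes=12)  # shape (50, 12)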

Related

ValueError: Input 0 of layer sequential_4 is incompatible with the layer: : expected min_ndim=3, found ndim=2. Full shape received: (None, 5000)

I am working on a project where I need to segregate the sleep data and its labels.
But I am stuck with the error mentioned above.
As I am new to machine learning, I would be highly grateful if someone could help me out with how to resolve this issue.
I have implemented a model using the following code:
EEG_training_data = EEG_training_data.reshape(EEG_training_data.shape[0], EEG_training_data.shape[1],1)
print(EEG_training_data.shape)# (5360, 5000, 1)
EEG_validation_data = EEG_validation_data.reshape(EEG_validation_data.shape[0], EEG_validation_data.shape[1],1)
print(EEG_validation_data.shape)#(1396, 5000, 1)
label_class = (np.unique(EEG_training_label))
num_classes = label_class.size # num_classes = 5
#define the model using CNN
model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv1D(filters=64, kernel_size=16, activation='relu', batch_input_shape=(None, 5000, 1)))  # input_shape=(5000, 1)
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.MaxPool1D(8, padding='same'))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(16, activation='relu'))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dropout(0.2))
model.add(tf.keras.layers.Dense(num_classes, activation='softmax'))
#Summary of the model defined:
model.summary()
#Define loss function
model.compile(
    loss='categorical_crossentropy',  # 'sparse_categorical_crossentropy',
    optimizer='adam',
    metrics=[tf.keras.metrics.FalseNegatives(), tf.keras.metrics.FalsePositives(), 'accuracy'])
#one Hot Encoding
y_train_hot = tf.keras.utils.to_categorical(EEG_training_label, num_classes)
print('New y_train shape: ', y_train_hot.shape)#(5360, 5)
y_valid_hot = tf.keras.utils.to_categorical(EEG_validation_label, num_classes)
print('New y_valid shape: ', y_valid_hot.shape)#(1396, 5)
# apply fit on data
model_history = model.fit(
    x=EEG_training_data,
    y=y_train_hot,
    batch_size=32,
    epochs=5,
    validation_data=(EEG_validation_data, y_valid_hot),
)
model_prediction = model.predict(EEG_testing_data)
predicted_matrix = tf.math.confusion_matrix(labels=EEG_testing_label.argmax(axis=1), predictions=model_prediction.argmax(axis=1)).numpy()
print(predicted_matrix)
I can't reproduce your issue with the code you have provided. Try executing the following code, which should work as expected. If it does, double-check that the shapes of all your data, i.e. EEG_training_data etc., match the ones below:
import tensorflow as tf
import numpy as np
EEG_training_data = np.ones((5360, 5000, 1))
EEG_validation_data = np.ones((1396, 5000, 1))
EEG_training_label = np.random.randint(5, size=5360)
EEG_validation_label = np.random.randint(5, size=1396)
label_class = (np.unique(EEG_training_label))
num_classes = label_class.size
print(num_classes) # prints 5
#define the model using CNN
model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv1D(filters=64, kernel_size= 16, activation='relu', batch_input_shape=(None, 5000, 1))) # input_shape=(5000, 1)
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.MaxPool1D(8, padding='same'))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(16, activation='relu'))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dropout(0.2))
model.add(tf.keras.layers.Dense(num_classes, activation='softmax'))
#Summary of the model defined:
model.summary()
#Define loss function
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=[tf.keras.metrics.FalseNegatives(), tf.keras.metrics.FalsePositives(), 'accuracy'])
#one Hot Encoding
y_train_hot = tf.keras.utils.to_categorical(EEG_training_label, num_classes)
print('New y_train shape: ', y_train_hot.shape) #(5360, 5)
y_valid_hot = tf.keras.utils.to_categorical(EEG_validation_label, num_classes)
print('New y_valid shape: ', y_valid_hot.shape) #(1396, 5)
# apply fit on data
model_history = model.fit(x=EEG_training_data,
                          y=y_train_hot,
                          batch_size=32,
                          epochs=5,
                          validation_data=(EEG_validation_data, y_valid_hot),
                          )
model_prediction = model.predict(EEG_validation_data)
# EEG_validation_label already holds integer class indices here, so no argmax is needed on it
predicted_matrix = tf.math.confusion_matrix(labels=EEG_validation_label, predictions=model_prediction.argmax(axis=1)).numpy()
print(predicted_matrix)
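If the shapes differ, the most common cause of expected min_ndim=3, found ndim=2 is that the data never got the trailing channel axis. A small sketch of a defensive check, assuming the array names from the question:
import numpy as np

EEG_training_data = np.ones((5360, 5000))  # hypothetical 2-D data, missing the channel axis
if EEG_training_data.ndim == 2:
    # add the trailing channel axis that Conv1D expects
    EEG_training_data = np.expand_dims(EEG_training_data, axis=-1)
print(EEG_training_data.shape)  # (5360, 5000, 1)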

CNN with keras giving the Graph disconnected error

I am experimenting with the use of multiple inputs to a double CNN.
However, I am getting the error: Graph disconnected: cannot obtain value for tensor Tensor("embedding_1_input:0", shape=(?, 40), dtype=float32) at layer "embedding_1_input". The following previous layers were accessed without issue: []
I couldn't find a fix for it. Also, given that there are layers with shape = (?, 50), why is there such a dimension "?"?
from keras.models import Model
from keras.layers import Input, Dot
from keras.layers.core import Reshape
from keras.backend import int_shape
max_words=50
thedata = [' '.join(list(x)) for x in self.input_data]
emb_size = 15
tokenizer = Tokenizer(num_words=max_words)
tokenizer.fit_on_texts(thedata)
self.dictionary = tokenizer.word_index
vocab_size = len(tokenizer.word_index) + 1
vocab_size = 5000
allWordIndices = []
for text in thedata:
    wordIndices = self.convert_text_to_index_array(text)
    allWordIndices.append(wordIndices)
allWordIndices = np.asarray(allWordIndices)
train_x = tokenizer.sequences_to_matrix(allWordIndices, mode='binary')
train_y = list(map(lambda x: self.c[x], self.output_data))
X_train, X_test, Y_train, Y_test = train_test_split(train_x, train_y, test_size = 0.1, shuffle=True)
vocab_size1 = 50000
input_size = 40
input_shape = (input_size, 1)
# first model
L_branch = Sequential()
L_branch.add(Embedding(vocab_size1, emb_size, input_length=max_words, trainable=True))
L_branch.add(Conv1D(50, activation='relu', kernel_size=10, input_shape=input_shape))
L_branch.add(MaxPooling1D(emb_size))
L_branch.add(Flatten())
L_branch.add(Dense(emb_size, activation='sigmoid'))
# second model
R_branch = Sequential()
R_branch.add(Embedding(vocab_size, output_dim=emb_size, input_length=max_words, trainable=True))
R_branch.add(Dense(emb_size, activation='sigmoid'))
input_in1 = Input(shape=(input_size, 1, 1))
input_in2 = Input(shape=(input_size, 1, 1))
merged = Dot(axes=(1, 2))([L_branch.outputs[0], R_branch.outputs[0]])
print(merged.shape)
#
merged = Reshape((-1, int_shape(merged)[1],))(merged)
latent = Conv1D(50, activation='relu', kernel_size=self.nb_classes, input_shape=(1, input_size, 1))(merged)
latent = MaxPooling1D(self.nb_classes)(latent)
out = Dense(self.nb_classes, activation='sigmoid')(latent)
final_model = Model([input_in2, input_in1], out)
final_model.compile(
    loss='mse',
    optimizer='adam',
    metrics=['accuracy'])
final_model.summary()
final_model.fit(
    [X_train, train_x],
    Y_train,
    batch_size=200,
    epochs=5,
    verbose=1
    # validation_split=0.1
)
I tried adding a Flatten layer after the second embedding, but I got an error, even with a dropout.
I also tried eliminating the max pooling from the L_branch model.
I also tried skipping the Reshape and just expanding the input dims, since I was getting an error from the second Conv1D saying the layer expects ndim=3 but got ndim=2:
latent = Conv1D(50, activation='relu', kernel_size=nb_classes, input_shape=(1, input_size, 1))(merged)
  File "/root/anaconda3/envs/oea/lib/python3.7/site-packages/keras/engine/base_layer.py", line 414, in __call__
    self.assert_input_compatibility(inputs)
  File "/root/anaconda3/envs/oea/lib/python3.7/site-packages/keras/engine/base_layer.py", line 311, in assert_input_compatibility
    str(K.ndim(x)))
ValueError: Input 0 is incompatible with layer conv1d_2: expected ndim=3, found ndim=2
I also didn't get which input is used in the first model and which input is used in the second.
You have to define your new model in this way: final_model = Model([L_branch.input, R_branch.input], out). Your Input layers input_in1 and input_in2 are never connected to the branches, which is why the graph is disconnected. (The "?" in the shapes is simply the batch dimension, which Keras leaves as None.)
I provide an example of a network with a structure similar to yours:
vocab_size1 = 5000
vocab_size2 = 50000
input_size1 = 40
input_size2 = 40
max_words=50
emb_size = 15
nb_classes = 10
# first model
L_branch = Sequential()
L_branch.add(Embedding(vocab_size1, emb_size, input_length=input_size1, trainable=True))
L_branch.add(Conv1D(50, activation='relu', kernel_size=10))
L_branch.add(MaxPooling1D())
L_branch.add(Flatten())
L_branch.add(Dense(emb_size, activation='relu'))
# second model
R_branch = Sequential()
R_branch.add(Embedding(vocab_size2, emb_size, input_length=input_size2, trainable=True))
R_branch.add(Flatten())
R_branch.add(Dense(emb_size, activation='relu'))
merged = Concatenate()([L_branch.output, R_branch.output])
latent = Dense(50, activation='relu')(merged)
out = Dense(nb_classes, activation='softmax')(latent)
final_model = Model([L_branch.input, R_branch.input], out)
final_model.compile(
    loss='sparse_categorical_crossentropy',
    optimizer='adam',
    metrics=['accuracy'])
final_model.summary()
X1 = np.random.randint(0, vocab_size1, (100, input_size1))
X2 = np.random.randint(0, vocab_size2, (100, input_size2))
y = np.random.randint(0, nb_classes, 100)
final_model.fit([X1, X2], y, epochs=10)

Unknown entries in loss dictionary: Only expected following keys ['conv1d_4', 'conv1d_4']

I have a model with multiple outputs. I want to assign a different loss function and metric to each output. The code is given below:
input_img = Input(shape=(n_states,n_features))
x = Conv1D(32, kernel_size=5, activation='relu', padding='same')(input_img)
x = Conv1D(32, kernel_size=5, activation='relu', padding='same')(x)
x = Conv1D(32, kernel_size=5, activation='relu', padding='same')(x)
decoded = Conv1D(n_outputs, kernel_size=3, activation='linear', padding='same')(x)
model = Model(inputs=input_img, outputs=[decoded,decoded])
model.compile(loss={'regression': 'mean_squared_error',
                    'diffusion': 'mean_absolute_error'},
              loss_weights={'regression': 1.0,
                            'diffusion': 0.5},
              optimizer='adam',
              metrics={'regression': coeff_determination,
                       'diffusion': coeff_determination})
model.summary()
history_callback = model.fit(x_train,
                             {'regression': y_train, 'diffusion': y_train},
                             batch_size=batch_size,
                             epochs=epochs,
                             validation_data=(x_valid, {'regression': y_valid, 'diffusion': y_valid}),
                             verbose=1)
If I run the above model, I get an error about unknown entries in the loss dictionary. Specifically, the error is Unknown entries in loss dictionary: ['diffusion', 'regression']. Only expected following keys: ['conv1d_4', 'conv1d_4'].
How can I give different names to each of the loss function? Thank you.
You need to match the names of your outputs with the loss dictionary keys. Here, you didn't name your outputs, so they default to conv1d_4 in the namespace. Try:
decoded1 = Conv1D(n_outputs, kernel_size=3, activation='linear',
                  padding='same', name='diffusion')(x)
decoded2 = Conv1D(n_outputs, kernel_size=3, activation='linear',
                  padding='same', name='regression')(x)
I doubled your output because I don't think you can apply two different losses to the same output.
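Presumably the Model call then needs to reference both named outputs:
model = Model(inputs=input_img, outputs=[decoded1, decoded2])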
Here's a minimal example of matching output/loss dictionary keys:
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Dense
import numpy as np
x_train = np.random.rand(1000, 10)
y_train = np.random.rand(1000)
inputs = Input(shape=(10,))
x = Dense(32, activation='relu')(inputs)
out1 = Dense(1, name='first_output')(x)
out2 = Dense(1, name='second_output')(x)
model = Model(inputs=inputs, outputs=[out1, out2])
model.compile(loss={'first_output': 'mean_squared_error',
                    'second_output': 'mean_absolute_error'},
              optimizer='adam')
history_callback = model.fit(x_train,
                             {'first_output': y_train, 'second_output': y_train},
                             batch_size=8, epochs=1)
Notice that the loss dictionary keys match the output layer names. The same should be done with metrics, loss weights, validation data, etc.
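For illustration, a sketch of how the same keys would extend to loss weights and metrics in the example above (the weight and metric choices here are arbitrary):
model.compile(loss={'first_output': 'mean_squared_error',
                    'second_output': 'mean_absolute_error'},
              loss_weights={'first_output': 1.0,
                            'second_output': 0.5},
              metrics={'first_output': ['mae'],
                       'second_output': ['mse']},
              optimizer='adam')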

TypeError: Input 'y' of 'Equal' Op has type float32 that does not match type int32 of argument 'x'

I'm pretty new to Keras and LSTMs. I've been trying to train my model on sequences to predict the future price of a stock with the code below, but the error above keeps popping up.
I have tried changing the dtypes of both x_data and y_data with .astype(np.float16). However, each time I am returned the TypeError stating that I have a float32 type.
If it helps, here are the shapes of my data:
xtrain.shape: (32, 24, 67), ytrain.shape: (32, 24, 1), xtest.shape: (38, 67), ytest.shape: (38, 1)
Does anyone have any idea what might be wrong? I've been stuck at this for a while. It would be great if someone could give me a hint.
y_data = y_data.to_numpy().astype(np.float32)
x_data = main_df.to_numpy().astype(np.float32)
num_x_signals = x_data.shape[1]
num_y_signals = y_data.shape[1]
# SPLIT TRAIN TEST DATA
ratio = 0.85
train_ratio = int(ratio * len(x_data))
x_train = x_data[0:train_ratio]
x_test = x_data[train_ratio:]
y_train = y_data[0:train_ratio]
y_test = y_data[train_ratio:]
# GENERATE RANDOM SEQUENCES
batch_size = 32
sequence_length = 24
EPOCHS = 50
def batch_generator(x_train, y_train, batch_size, sequence_length, num_x_signals, num_y_signals, num_train):
    while True:
        x_shape = (batch_size, sequence_length, num_x_signals)
        x_batch = np.zeros(shape=x_shape).astype(np.float32)
        y_shape = (batch_size, sequence_length, num_y_signals)
        y_batch = np.zeros(shape=y_shape).astype(np.float32)
        for i in range(batch_size):
            idx = np.random.randint(num_train - sequence_length)
            x_batch[i] = x_train[idx:idx+sequence_length]
            y_batch[i] = y_train[idx:idx+sequence_length]
        yield (x_batch, y_batch)
generator = batch_generator(x_train, y_train, batch_size, sequence_length, num_x_signals, num_y_signals, train_ratio)
xtrain, ytrain = next(generator)
xtest, ytest = (np.expand_dims(x_test, axis=0),
                np.expand_dims(y_test, axis=0))
# LSTM MODEL
model = Sequential()
model.add(LSTM(32, input_shape = (None, num_x_signals,), return_sequences = True))
model.add(Dropout(0.2))
model.add(BatchNormalization())
model.add(LSTM(128, return_sequences = True))
model.add(Dropout(0.15))
model.add(BatchNormalization())
model.add(LSTM(128))
model.add(Dropout(0.18))
model.add(BatchNormalization())
model.add(Dense(32, activation = 'relu'))
model.add(Dropout(0.2))
model.add(Dense(1, activation = 'softmax'))
opt = tf.keras.optimizers.Adam(lr = 0.001, decay = 1e-6)
model.compile(
    loss='sparse_categorical_crossentropy',
    optimizer=opt,
    metrics=['accuracy']
)
name_of_file = f"{to_predict}-{sequence_length}-{future_predict}-{int(time.time())}"
tensorboard = TensorBoard(log_dir = "logs/{}".format(name_of_file))
filepath = "LSTM_Final-{epoch:02d}-{val_acc:.3f}"
checkpoint = ModelCheckpoint("models/{}.model".format(filepath), monitor='val_acc', verbose=1, save_best_only=True, mode='max')  # saves only the best ones
history = model.fit(
    xtrain, ytrain,
    epochs=EPOCHS,
    validation_data=(xtest, ytest),
    callbacks=[tensorboard, checkpoint]
)
score = model.evaluate(xtest, ytest, verbose = 0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
model.save("models/{}".format(name_of_file))
I found this issue had to do with the loss function specified.
My code:
import tensorflow as tf
from tensorflow import keras
model = tf.keras.Sequential([
    keras.layers.Dense(64, activation=tf.nn.relu, input_shape=[3]),
    keras.layers.Dense(64, activation=tf.nn.relu),
    keras.layers.Dense(1)
])
# I changed the loss function from 'sparse_categorical_crossentropy' to 'mean_squared_error'
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
X = train_dataset.to_numpy()
y = train_labels.to_numpy()
model.fit(X,y, epochs=5)
X shape was (920, 3) and dtype = float64.
y shape was (920, 1) and dtype = float64.
My problem was in the model.fit method: I had taken the 'sparse_categorical_crossentropy' function from an image recognition example, but what I was building here was a neural network for house price prediction.
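For reference, a minimal sketch of the pairing rule with toy data: sparse_categorical_crossentropy expects integer class labels and a softmax output with one unit per class, while a regression task pairs a linear output with mean_squared_error.
import numpy as np
import tensorflow as tf

# classification: integer labels 0..4 paired with a 5-unit softmax
clf = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(3,)),
    tf.keras.layers.Dense(5, activation='softmax')
])
clf.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
clf.fit(np.random.rand(100, 3), np.random.randint(0, 5, size=100), epochs=1)

# regression: single linear unit paired with mean squared error
reg = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(3,)),
    tf.keras.layers.Dense(1)
])
reg.compile(optimizer='adam', loss='mean_squared_error')
reg.fit(np.random.rand(100, 3), np.random.rand(100), epochs=1)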

Getting vector obtained in the last layer of CNN before softmax layer

I am trying to implement a system by encoding inputs using a CNN. After the CNN, I need to get a vector and use it in another deep learning method.
def get_input_representation(self):
    # get word vectors from embedding
    inputs = tf.nn.embedding_lookup(self.embeddings, self.input_placeholder)
    sequence_length = inputs.shape[1]  # 56
    vocabulary_size = 160  # 18765
    embedding_dim = 256
    filter_sizes = [3, 4, 5]
    num_filters = 3
    drop = 0.5
    epochs = 10
    batch_size = 30
    # this returns a tensor
    print("Creating Model...")
    inputs = Input(shape=(sequence_length,), dtype='int32')
    embedding = Embedding(input_dim=vocabulary_size, output_dim=embedding_dim, input_length=sequence_length)(inputs)
    reshape = Reshape((sequence_length, embedding_dim, 1))(embedding)
    conv_0 = Conv2D(num_filters, kernel_size=(filter_sizes[0], embedding_dim), padding='valid', kernel_initializer='normal', activation='relu')(reshape)
    conv_1 = Conv2D(num_filters, kernel_size=(filter_sizes[1], embedding_dim), padding='valid', kernel_initializer='normal', activation='relu')(reshape)
    conv_2 = Conv2D(num_filters, kernel_size=(filter_sizes[2], embedding_dim), padding='valid', kernel_initializer='normal', activation='relu')(reshape)
    maxpool_0 = MaxPool2D(pool_size=(sequence_length - filter_sizes[0] + 1, 1), strides=(1, 1), padding='valid')(conv_0)
    maxpool_1 = MaxPool2D(pool_size=(sequence_length - filter_sizes[1] + 1, 1), strides=(1, 1), padding='valid')(conv_1)
    maxpool_2 = MaxPool2D(pool_size=(sequence_length - filter_sizes[2] + 1, 1), strides=(1, 1), padding='valid')(conv_2)
    concatenated_tensor = Concatenate(axis=1)([maxpool_0, maxpool_1, maxpool_2])
    flatten = Flatten()(concatenated_tensor)
    dropout = Dropout(drop)(flatten)
    output = Dense(units=2, activation='softmax')(dropout)
    model = Model(inputs=inputs, outputs=output)
    adam = Adam(lr=1e-4, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
    model.compile(optimizer=adam, loss='binary_crossentropy', metrics=['accuracy'])
print("Traning Model...")
model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, callbacks=[checkpoint], validation_data=(X_test, y_test)) # starts training
return ??
The above code trains the model using X_train and y_train and then tests it. However, in my system I do not have Y_train or Y_test; I only need the vector in the last hidden layer, before the softmax layer. How can I obtain it?
For that you can define a backend function to get the output of arbitrary layer(s):
from keras import backend as K
func = K.function([model.input], [model.layers[index_of_layer].output])
You can find the index of your desired layer using model.summary(), where the layers are listed starting from index zero. If you need the layer before the last layer, you can use -2 as the index (the .layers attribute is a list, so you can index it like any Python list). Then you can use the function you have defined by passing a list of input array(s):
outputs = func(inputs)
Alternatively, you can also define a model for this purpose. This has been covered in Keras documentation more thoroughly so I advise you to read that.
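A minimal sketch of that model-based alternative, assuming model is the trained network from the question, where the layer at index -2 (the Dropout, which is an identity at inference time) feeds the final softmax Dense:
from keras.models import Model

# reuse the trained graph but stop just before the softmax layer
feature_extractor = Model(inputs=model.input,
                          outputs=model.layers[-2].output)

# each row of features is the hidden vector for one input sample
features = feature_extractor.predict(X_train)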
