I use GloVe embeddings to convert texts into vectors in order to predict binary sentiment. I also want to include a dummy variable in my NN (published in winter = 0, summer = 1).
I read some sources on multiple inputs, but I get:
ValueError: Unexpectedly found an instance of type `<class 'keras.layers.merge.Concatenate'>`. Expected a symbolic tensor instance. .... Layer output was called with an input that isn't a symbolic tensor. Received type: <class 'keras.layers.merge.Concatenate'>. Full input: [<keras.layers.merge.Concatenate object at 0x7f2b9b677d68>]
My network looks like this:
sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedded_sequences = embedding_layer(sequence_input)
dummy = Input(shape=(1,), dtype='int32', name='dummy')
x = Conv1D(100, 20, activation='relu')(embedded_sequences) # filters=100, kernel_size=20
x = MaxPooling1D(5)(x) # reduces output to 1/5 of its original length by taking only max values
x = Conv1D(100, 20, activation='relu')(x)
x = GlobalAveragePooling1D()(x) # global average pooling
x = Dropout(0.5)(x)
x = Dense(100, activation='relu')(x)
combined = Concatenate([x, dummy])
preds = Dense(1, activation='sigmoid',name='output')(combined)
model = Model(inputs=[sequence_input,dummy], outputs=[preds])
print(model.summary())
I feel I'm missing something essential but can't figure out what. What I want is:
text + dummy --> binary_prediction
Your Concatenate call is not correct; it should be:
combined = Concatenate()([x, dummy])
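As a side note, the lowercase concatenate helper instantiates and calls the layer in one step, so it is an equivalent fix. Also worth checking (an assumption about your version, not something the traceback shows): dummy is declared int32 while x is float32, and the backend may refuse to concatenate mixed dtypes, so declaring the dummy input as float32 is safer.
from keras.layers import concatenate
# one-step equivalent of Concatenate()([x, dummy])
combined = concatenate([x, dummy])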
I have the following neural net model. The input is an int sequence. Two sub-networks begin from that same input layer and are concatenated together; this concatenation is the final output of the model. If I specify the model's input as main_input, and the entity_extraction and relation_extraction networks also start from main_input with their concatenation as the final output, does that mean I have 3 inputs to this model? What is the underlying input/output mechanism in this model?
main_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32', name='main_input')
x = embedding_layer(main_input)
x = CuDNNLSTM(KG_EMBEDDING_DIM, return_sequences=True)(x)
x = Avg(x)
x = Dense(KG_EMBEDDING_DIM)(x)
x = Activation('relu')(x)
# relation_extraction = Reshape([KG_EMBEDDING_DIM])(x)
relation_extraction = Transpose(x)
x = embedding_layer(main_input)
x = CuDNNLSTM(KG_EMBEDDING_DIM, return_sequences=True)(x)
x = Avg(x)
x = Dense(KG_EMBEDDING_DIM)(x)
x = Activation('relu')(x)
# entity_extraction = Reshape([KG_EMBEDDING_DIM])(x)
entity_extraction = Transpose(x)
final_output = Dense(units=20, activation='softmax')(Concatenate(axis=0)([entity_extraction,relation_extraction]))
m = Model(inputs=[main_input], outputs=[final_output])
main_input is the only input to this model. The entity_extraction and relation_extraction branches both consume the same main_input tensor, so reusing it does not add model inputs. The outputs of the two LSTM branches are transposed, concatenated, and passed through a Dense layer to produce the final output.
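A minimal sketch of the same pattern (hypothetical layer sizes) shows why reusing one Input tensor across branches still yields a single-input model:
from tensorflow.keras.layers import Input, Dense, Concatenate
from tensorflow.keras.models import Model
inp = Input(shape=(8,))                     # the one and only Input node
branch_a = Dense(4, activation='relu')(inp) # first branch reuses inp
branch_b = Dense(4, activation='relu')(inp) # second branch reuses the same tensor
merged = Concatenate()([branch_a, branch_b])
out = Dense(2, activation='softmax')(merged)
m = Model(inputs=inp, outputs=out)
print(len(m.inputs))  # prints 1 -- sharing a tensor does not create extra inputs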
I'm creating a functional API keras model to regress and predict a numerical value based on two mixed-data inputs.
The network consists of two models whose outputs are concatenated and fed into a third model, which outputs the final value to be compared against the target.
My (unfinished) code looks like this:
def categorical_model():
    inputA = Input(shape=(3, 1329,))
    x = Dense(8, activation='relu')(inputA)
    x = Dense(4, activation='relu')(x)
    return Model(inputs=inputA, outputs=x)

def continuous_model():
    inputB = Input(shape=(1329, 2))
    y = Dense(64, activation="relu")(inputB)
    y = Dense(32, activation="relu")(y)
    y = Dense(4, activation="relu")(y)
    return Model(inputs=inputB, outputs=y)
cat = categorical_model()
con = continuous_model()
catcon_list = [cat.output, con.output]
concatenated = concatenate(catcon_list, axis=0, name='concatenate')
concatenated = Dense(2, activation='softmax')(concatenated)
merged = Model(input=[cat.input, con.input], output=concatenated)
merged.summary()
The code results in a ValueError as such:
ValueError: A `Concatenate` layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 3, 4), (None, 1329, 4)]
Can these inputs be concatenated? How can I change the axis to the 2nd dimension?
This should work:
concatenated = Concatenate(axis=1)(catcon_list)
merged = Model([cat.input, con.input], concatenated)
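With axis=1, the shapes (None, 3, 4) and (None, 1329, 4) are joined along the second dimension, giving (None, 1332, 4); Concatenate only requires the non-concatenation axes to match, which they do here.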
I've looked at a few similar questions but I still don't understand how to solve my problem.
I am trying to build a CNN that estimates how many particles hit a detector, based on what's essentially an oscilloscope trace of the energy released in the detector over time.
I have 100,000 events of 1024 time samples, which I split 80/20 as train/test, like so:
from sklearn.model_selection import train_test_split
train_to_test_ratio=0.8 #proportion of the dataset to include in the train split
X_train,X_test,Y_train,Y_test=train_test_split(NormSignals,labels,train_size=train_to_test_ratio)
no_outputs = 14 # maximum number of particles expected
# force the labels to have 14 binary digits, one for each of the possible outputs
Y_train=tf.one_hot(Y_train,no_outputs)
Y_test=tf.one_hot(Y_test,no_outputs)
When I try to define the input shape for the network I do so like this (full CNN code below):
# Define input to neural network (tensors of 1024 time samples x 1 amplitude per sample)
inputs = keras.Input(shape=(1024,1))
But it gives me the error: "Input 0 of layer Conv_1 is incompatible with the layer: expected ndim=4, found ndim=3. Full shape received: [None, 1024, 1]"
I thought the input shape was as simple as the shape of the data arrays being passed to the network. Can someone please explain what the correct shape of my data should be?
Thank you very much in advance!
Full CNN:
import numpy as np
from tensorflow import keras
# Following the architecture of the CNN from the image recognition lab (14/5/2020):
# Simple CNN:
class noiseLayer(keras.layers.Layer):
    def __init__(self, mean):
        super(noiseLayer, self).__init__()
        self.mean = mean

    def call(self, input):
        mean = self.mean
        # adds one Poisson sample, normalised by the mean, to the whole input
        return input + (np.random.poisson(mean)) / mean
# Add data augmentation to produce a random flip of the data (the ECal is symmetrical)
# and add poissonian noise to all of the crystals - using large N and dividing by N normalises
# the noise to be approximately continuous between 0 and 1
data_augmentation = keras.Sequential([
    noiseLayer(mean=1000)
], name='DataAugm')
# Define input to neural network (tensors of 1024 time samples x 1 amplitude per sample)
inputs = keras.Input(shape=(1024,1))
#x=inputs
x = data_augmentation(inputs)
# first convolutional block
x = keras.layers.Conv2D(16, kernel_size=(3,3), name='Conv_1')(x)
x = keras.layers.LeakyReLU(0.1)(x)
x = keras.layers.MaxPool2D((2,2), name='MaxPool_1')(x)
# second convolutional block
x = keras.layers.Conv2D(16, kernel_size=(3,3), name='Conv_2')(x)
x = keras.layers.LeakyReLU(0.1)(x)
x = keras.layers.MaxPool2D((2,2), name='MaxPool_2')(x)
# third convolutional block
x = keras.layers.Conv2D(32, kernel_size=(3,3), name='Conv_3')(x)
x = keras.layers.LeakyReLU(0.1)(x)
x = keras.layers.MaxPool2D((2,2), name='MaxPool_3')(x)
# Flatten output tensor of the last convolutional layer so it can be used as
# input to the dense layers
x = keras.layers.Flatten(name='Flatten')(x)
# dense network: 2 dense hidden layers with 64 neurons each, with ReLU activation
# Classifier
x = keras.layers.Dense(64, name='Dense_1')(x)
x = keras.layers.ReLU(name='ReLU_dense_1')(x)
#x = keras.layers.Dropout(0.2)(x)
x = keras.layers.Dense(64, name='Dense_2')(x)
x = keras.layers.ReLU(name='ReLU_dense_2')(x)
outputs = keras.layers.Dense(no_outputs, activation='softmax', name='Output')(x)
# Model definition
model = keras.Model(inputs=inputs, outputs=outputs, name='VGGlike_CNN')
# Print model summary
model.summary()
# Show model structure
keras.utils.plot_model(model, show_shapes=True)
The problem was that I was using 2D layers to try to solve a 1D problem.
Changing all the 2D layers to 1D now compiles without errors:
x = keras.layers.Conv1D(16, kernel_size=(3), name='Conv_1')(x)
x = keras.layers.LeakyReLU(0.1)(x)
x = keras.layers.MaxPool1D((2), name='MaxPool_1')(x)
# second convolutional block
x = keras.layers.Conv1D(16, kernel_size=(3), name='Conv_2')(x)
x = keras.layers.LeakyReLU(0.1)(x)
x = keras.layers.MaxPool1D((2), name='MaxPool_2')(x)
# third convolutional block
x = keras.layers.Conv1D(32, kernel_size=(3), name='Conv_3')(x)
x = keras.layers.LeakyReLU(0.1)(x)
x = keras.layers.MaxPool1D((2), name='MaxPool_3')(x)
# Flatten output tensor of the last convolutional layer so it can be used as
# input to the dense layers
x = keras.layers.Flatten(name='Flatten')(x)
# dense network: 2 dense hidden layers with 64 neurons each, with ReLU activation
# Classifier
x = keras.layers.Dense(64, name='Dense_1')(x)
x = keras.layers.ReLU(name='ReLU_dense_1')(x)
#x = keras.layers.Dropout(0.2)(x)
x = keras.layers.Dense(64, name='Dense_2')(x)
x = keras.layers.ReLU(name='ReLU_dense_2')(x)
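With the 1D layers, the (None, 1024, 1) input is interpreted as 1024 time steps with one channel each, so keras.Input(shape=(1024, 1)) was correct all along; only the layer dimensionality had to match the data.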
I want to train a Keras model whose input is a vector of size (20, 300).
The problem is that I also need to feed the model a fixed list of vectors that should be used at each training step.
The list of vectors is fixed for all training examples. Here's what I've tried:
def create_model(num_filters=64, embedding_dim=300, seq_len=20):
    # input1 shape: (?, 20, 300)
    input1 = Input(shape=(seq_len, embedding_dim,), dtype='float32')  # taken from the model input
    # input2 shape: (5, 20, 300)
    input2 = get_input2()  # taken from outside the model

    # CNN encoding of input1
    convs = []
    filter_sizes = [1, 2, 3]
    for fsz in filter_sizes:
        x = Conv1D(num_filters, fsz, activation='relu', padding='same')(input1)
        x = MaxPooling1D()(x)
        convs.append(x)
    output1 = Concatenate(axis=-1)(convs)
    output1 = Flatten()(output1)

    # CNN encoding of input2
    convs1 = []
    filter_sizes = [1, 2, 3]
    for fsz in filter_sizes:
        x1 = Conv1D(num_filters, fsz, activation='relu', padding='same')(input2)
        x1 = MaxPooling1D()(x1)
        convs1.append(x1)
    output2 = Concatenate(axis=-1)(convs1)
    output2 = Flatten()(output2)
However, this implementation throws a ValueError:
"ValueError: Layer conv1d_60 was called with an input that isn't a symbolic tensor. Received type: ."
How can this be done in Keras?
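The error occurs because get_input2() presumably returns a plain NumPy array, and Keras layers can only be called on symbolic tensors. One possible fix (a minimal sketch; the stand-in array and shapes are assumptions based on the question) is to wrap the fixed array in a constant via a Lambda layer so that downstream layers receive a symbolic tensor; older Keras versions also accept Input(tensor=K.constant(...)) for the same purpose.
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Lambda
# hypothetical stand-in for the fixed vectors, shape (5, 20, 300) as in the question
fixed_vectors = np.random.rand(5, 20, 300).astype('float32')
input1 = Input(shape=(20, 300), dtype='float32')
# the Lambda ignores its argument and returns the constant as a symbolic tensor;
# note that the constant's first dimension (5) behaves as its batch axis
input2 = Lambda(lambda _: tf.constant(fixed_vectors))(input1)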
I had a similar problem to the one given in the following link:
ValueError: `decode_predictions` expects a batch of predictions (i.e. a 2D array of shape (samples, 1000)). Found array with shape: (1, 7)
I have been able to run my code successfully and also got the probabilities for each class. My concern is: what will be the order of the output probabilities? That is, how will we know which probability belongs to which class, since we are assigning or mapping them manually? Any assistance would be really appreciated!
Code:
class_list = style_info['articleType'].unique()
base_model = ResNet50(weights='imagenet', include_top=False, input_shape=(80, 60, 3))
img = glob.glob('images\\' + str(style_info.iloc[55, 0]) + '.jpg')
# add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
# add a fully-connected layer
x = Dense(1024, activation='relu')(x)
# and a logistic layer -- let's say we have 7 classes
predictions = Dense(no_of_classes, activation='softmax')(x)
res_model = Model(inputs=base_model.input, outputs=predictions)
image1 = image.load_img(img[0], target_size=(80,60))
x = image.img_to_array(image1)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
preds = res_model.predict(x)
# decode the results into a list of tuples (class, description, probability)
# (one such list for each sample in the batch)
#print('Predicted:', decode_predictions(preds, top=5)[0])
# Predicted: [(u'n02504013', u'Indian_elephant', 0.82658225), (u'n01871265', u'tusker', 0.1122357), (u'n02504458', u'African_elephant', 0.061040461)]
class_list[np.argmax(preds[0])]
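The output order is determined entirely by how the labels were encoded for training: unit i of the final softmax Dense layer corresponds to column i of the one-hot training labels, not to anything intrinsic to the model. Assuming the labels were encoded in the order of class_list (an assumption; the encoding step is not shown in the question), the probabilities can be paired with class names like this:
import numpy as np
# assumption: label index i in the training labels corresponds to class_list[i]
probs = preds[0]
for idx in np.argsort(probs)[::-1]:
    print(class_list[idx], probs[idx])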