I have two training arrays, X_data and B_data. I want two shared LSTM layers to predict two outputs, one for X_data and one for B_data.
l1 = layers.LSTM(40)(X_data)
flat_layer = Flatten()(l1)
l2 = layers.LSTM(20)(B_data)
flat_layer2 = Flatten()(l2)
output1 = Dense(1, activation='sigmoid')(flat_layer)
output2 = Dense(1, activation='sigmoid')(flat_layer2)
model = keras.Model(inputs=[X_data,B_data], outputs=[output1,output2])
I get this error:
AttributeError: Tensor.op is meaningless when eager execution is enabled.
Any suggestions?
The mistake is that keras.Model(inputs=...) does not take the input data but the Input layers (just as you correctly passed layers for outputs). The data is passed via model.fit(). So first of all, you'll need two Input layers:
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, LSTM, Flatten, Dense
X_data = np.random.uniform(0,1,(3,100,40))
B_data = np.random.uniform(0,1,(3,100,20))
y1 = np.random.uniform(0,1,(3,1))
y2 = np.random.uniform(0,1,(3,1))
i1 = Input((100,40)) # you need input layers
i2 = Input((100,20))
l1 = LSTM(40)(i1)
flat_layer = Flatten()(l1)
l2 = LSTM(20)(i2)
flat_layer2 = Flatten()(l2)
output1 = Dense(1, activation='sigmoid')(flat_layer)
output2 = Dense(1, activation='sigmoid')(flat_layer2)
model = tf.keras.Model(inputs=[i1,i2], outputs=[output1,output2])
model.compile('sgd', 'mse')
model.fit(x=[X_data,B_data], y=[y1,y2]) # this is where you pass input (data) and output (labels)
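As a side note (my addition, not part of the fix above): giving the Input layers names lets you pass the training arrays by name, which makes it harder to mix them up when there are several inputs. The names "x_in" and "b_in" below are made up for this sketch.
i1 = Input((100, 40), name="x_in")
i2 = Input((100, 20), name="b_in")
o1 = Dense(1, activation='sigmoid')(LSTM(40)(i1))
o2 = Dense(1, activation='sigmoid')(LSTM(20)(i2))
model = tf.keras.Model(inputs=[i1, i2], outputs=[o1, o2])
model.compile('sgd', 'mse')
model.fit(x={"x_in": X_data, "b_in": B_data}, y=[y1, y2])  # arrays matched to inputs by name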
My DataFrame looks like this, and I convert it for the input data:
trainxx=np.array(trainn3)
X_data = trainxx.reshape((trainxx.shape[0], 1, trainxx.shape[1]))
The y values are a NumPy array:
ytrainxx=np.array(ytrains)
And I can't adapt your Input-layer solution to my data.
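If the DataFrame is reshaped to (samples, 1, n_features) as above, the matching Input layer just needs that per-sample shape. A minimal single-input sketch (assuming ytrains holds one label per row; everything else follows your snippet and the imports above):
trainxx = np.array(trainn3)
X_data = trainxx.reshape((trainxx.shape[0], 1, trainxx.shape[1]))  # (samples, 1, n_features)
y_data = np.array(ytrains)
i1 = Input(shape=(1, trainxx.shape[1]))  # timesteps=1, features=n_features
out = Dense(1, activation='sigmoid')(LSTM(40)(i1))
model = tf.keras.Model(inputs=i1, outputs=out)
model.compile('sgd', 'mse')
model.fit(X_data, y_data)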
I have the following neural network model. The input is an integer sequence. There are also two sub-networks, entity_extraction and relation_extraction, that start from the same type of input layer and are concatenated together; this concatenation is the final output of the model. If I specify the input of the model as main_input, and the entity_extraction and relation_extraction networks also start from main_input and their concatenation is the final output, does that mean I have three inputs to this model? What is the underlying input/output mechanism in this model?
main_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32', name='main_input')
x = embedding_layer(main_input)
x = CuDNNLSTM(KG_EMBEDDING_DIM, return_sequences=True)(x)
x = Avg(x)
x = Dense(KG_EMBEDDING_DIM)(x)
x = Activation('relu')(x)
# relation_extraction = Reshape([KG_EMBEDDING_DIM])(x)
relation_extraction = Transpose(x)
x = embedding_layer(main_input)
x = CuDNNLSTM(KG_EMBEDDING_DIM, return_sequences=True)(x)
x = Avg(x)
x = Dense(KG_EMBEDDING_DIM)(x)
x = Activation('relu')(x)
# entity_extraction = Reshape([KG_EMBEDDING_DIM])(x)
entity_extraction = Transpose(x)
final_output = Dense(units=20, activation='softmax')(Concatenate(axis=0)([entity_extraction,relation_extraction]))
m = Model(inputs=[main_input], outputs=[final_output])
main_input is the only input to this model. The relation_extraction and entity_extraction branches both consume the same input (through the shared embedding_layer). The outputs of the two LSTM branches are transposed, concatenated, and passed through a Dense layer to produce the final output.
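For intuition, here is a minimal stand-in with standard layers (LSTM instead of CuDNNLSTM, GlobalAveragePooling1D instead of the custom Avg, concatenation along the feature axis instead of Transpose with axis=0; those substitutions and the sizes are mine) showing the same wiring: one Input feeds both branches, and model.summary() lists a single InputLayer.
import tensorflow as tf
from tensorflow.keras import layers, Input, Model
MAX_SEQUENCE_LENGTH, VOCAB_SIZE, KG_EMBEDDING_DIM = 50, 1000, 16  # illustrative sizes
main_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32', name='main_input')
embedding_layer = layers.Embedding(VOCAB_SIZE, KG_EMBEDDING_DIM)  # shared by both branches
def branch(inp):
    # same input, separate LSTM/Dense weights per branch
    x = embedding_layer(inp)
    x = layers.LSTM(KG_EMBEDDING_DIM, return_sequences=True)(x)
    x = layers.GlobalAveragePooling1D()(x)
    return layers.Dense(KG_EMBEDDING_DIM, activation='relu')(x)
relation_extraction = branch(main_input)
entity_extraction = branch(main_input)
final_output = layers.Dense(20, activation='softmax')(
    layers.Concatenate(axis=-1)([entity_extraction, relation_extraction]))
m = Model(inputs=[main_input], outputs=[final_output])
m.summary()  # one InputLayer, two parallel branches, one output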
I want to make a model like the picture below (simplified).
So, practically, I want the weights with the same names to always have the same values during training. What I did was the code below:
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
example_train_features = np.arange(12000).reshape(1000, 12)
example_labels = np.random.randint(2, size=1000)  # these data are just for illustration purposes
train_ds = tf.data.Dataset.from_tensor_slices((example_train_features, example_labels)).shuffle(buffer_size=1000).batch(32)
dense1 = layers.Dense(1, activation="relu") #input shape:4
dense2 = layers.Dense(2, activation="relu") #input shape:1
dense3 = layers.Dense(1, activation="sigmoid") #input shape:6
feature_input = keras.Input(shape=(12,), name="features")
nodes_list = []
for i in range(3):
    first_lvl_input = feature_input[i :: 4]  ######## marked line
    out1 = dense1(first_lvl_input)
    out2 = dense2(out1)
    nodes_list.append(out2)
joined = layers.concatenate(nodes_list)
final_output = dense3(joined)
model = keras.Model(inputs = feature_input, outputs = final_output, name="extrema_model")
compile_and_fit(model, train_ds, val_ds, patience=4)
model.compile(loss=tf.keras.losses.BinaryCrossentropy(),
              optimizer=tf.keras.optimizers.RMSprop(),
              metrics=keras.metrics.BinaryAccuracy())
history = model.fit(train_ds, epochs=10, validation_data=val_ds)
But when I try to run this code I get this error:
MklConcatOp : Dimensions of inputs should match: shape[0][0]= 71 vs. shape[18][0] = 70
[[node extrema_model/concatenate_2/concat (defined at <ipython-input-373-5efb41d312df>:398) ]] [Op:__inference_train_function_15338]
(Please don't pay attention to the exact numbers; they come from my real code.) I think this happens because Keras receives the whole data, including the labels, as input, but shouldn't it feed only the features? Anyway, if I write the marked line as below:
first_lvl_input = feature_input[i :12: 4]
it doesn't give me the above error anymore. But then I get another error; I know why it happens, but I don't know how to resolve it.
InvalidArgumentError: Incompatible shapes: [4,1] vs. [32,1]
[[node gradient_tape/binary_crossentropy/logistic_loss/mul/BroadcastGradientArgs
(defined at <ipython-input-1-b82546367b3c>:398) ]] [Op:__inference_train_function_6098]
This is because Keras is again feeding the whole batch array, whereas the Keras documentation says you shouldn't specify the batch dimension because the framework handles it itself, so I expected Keras to feed the data sample by sample and my code to work. I'd appreciate any ideas on how to resolve this, or on how to write code that does what I want. Thanks.
You can wrap the dense layers in a TimeDistributed wrapper and reshape your data to three dimensions, (1000, 3, 4) (batch, sequence, feature). Then, for each of the 3 time steps (which replace your for loop), the four features are multiplied by the same weights.
example_train_features = np.arange(12000).reshape(1000, 3, 4)
example_labels = np.random.randint(2, size=1000)  # these data are just for illustration purposes
train_ds = tf.data.Dataset.from_tensor_slices((example_train_features, example_labels)).shuffle(buffer_size=1000).batch(32)
dense1 = layers.TimeDistributed(layers.Dense(1, activation="relu")) #input shape:4
dense2 = layers.TimeDistributed(layers.Dense(2, activation="relu")) #input shape:1
dense3 = layers.Dense(1, activation="sigmoid") #input shape:6
feature_input = keras.Input(shape=(3,4), name="features")
out1 = dense1(feature_input)
out2 = dense2(out1)
z = layers.Flatten()(out2)
final_output = dense3(z)
model = keras.Model(inputs = feature_input, outputs = final_output, name="extrema_model")
model.compile(loss=tf.keras.losses.BinaryCrossentropy(),
              optimizer=tf.keras.optimizers.RMSprop(),
              metrics=keras.metrics.BinaryAccuracy())
history = model.fit(train_ds, epochs=10)
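To confirm that the weights really are shared across the three time steps, note that each wrapped Dense layer owns a single kernel and bias regardless of sequence length; for example:
# dense1 wraps Dense(1) over 4 features: one (4, 1) kernel plus one bias = 5 parameters,
# applied identically at every time step.
print([w.shape for w in dense1.get_weights()])  # [(4, 1), (1,)]
print(dense1.count_params())                    # 5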
I got stuck trying to create a Keras model representing the function a(x)u + b(x), where a(x) and b(x) are two nonlinear functions I want to fit by regression and u is a varying input. Below is code that does regression for a(x) + b(x) using Keras concatenation, i.e. without the u. How should u be inserted into the structure so that the model becomes a(x)u + b(x)? Thanks.
from tensorflow.keras.layers import Input, Dense, concatenate
from tensorflow.keras.models import Model

def build_model():
    A = Input(shape=[3])
    B = Input(shape=[3])
    a1 = Dense(32, activation='relu')(A)
    b1 = Dense(32, activation='relu')(B)
    c = concatenate([a1, b1])
    O = Dense(1, activation='linear')(c)
    model = Model(inputs=[A, B], outputs=O)
    model.compile()
    return model
I think you can do the following. Note that there is a single input, and that input is connected to two separate dense layers.
import tensorflow as tf
tfk = tf.keras
tfkl = tfk.layers
def build_model(u):
    """Return model representing `a(x)u + b(x)`."""
    inputs = tfkl.Input(shape=[3])
    ax = tfkl.Dense(32, activation='relu')(inputs)
    axu = tfkl.Lambda(lambda x: x * u)(ax)
    bx = tfkl.Dense(32, activation='relu')(inputs)
    x = tfkl.Concatenate()([axu, bx])
    x = tfkl.Dense(1, activation='linear')(x)
    model = tfk.Model(inputs=inputs, outputs=x)
    return model
In your original code, you used a Concatenate layer. Are you sure you want that and not an elementwise sum?
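If u varies per training example rather than being a fixed Python constant, a variant (my sketch, not required by the answer above) can take u as a second Input and multiply it in inside the graph:
def build_model_u_as_input():
    """Variant of the model above where `u` is fed per example."""
    x_in = tfkl.Input(shape=[3], name='x')
    u_in = tfkl.Input(shape=[1], name='u')
    ax = tfkl.Dense(32, activation='relu')(x_in)
    axu = tfkl.Lambda(lambda t: t[0] * t[1])([ax, u_in])  # broadcasts (batch, 1) over (batch, 32)
    bx = tfkl.Dense(32, activation='relu')(x_in)
    out = tfkl.Dense(1, activation='linear')(tfkl.Concatenate()([axu, bx]))
    return tfk.Model(inputs=[x_in, u_in], outputs=out)
At fit time, u is then passed as a second array alongside x.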
I use GloVe embeddings to convert texts into vectors to predict binary sentiment. I also want to include a dummy variable in my NN (published in winter = 0, summer = 1).
I read some sources on multiple inputs, but I get:
ValueError: Unexpectedly found an instance of type `<class 'keras.layers.merge.Concatenate'>`. Expected a symbolic tensor instance. .... Layer output was called with an input that isn't a symbolic tensor. Received type: <class 'keras.layers.merge.Concatenate'>. Full input: [<keras.layers.merge.Concatenate object at 0x7f2b9b677d68>]
My network looks like this:
sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedded_sequences = embedding_layer(sequence_input)
dummy= Input(shape=(1,), dtype='int32', name='dummy')
x = Conv1D(100, 20, activation='relu')(embedded_sequences) # filter= 100, kernel=20
x = MaxPooling1D(5)(x) # reduces output to 1/5 of original data by taking only max values
x = Conv1D(100, 20, activation='relu')(x)
x = GlobalAveragePooling1D()(x) # global average pooling
x = Dropout(0.5)(x)
x = Dense(100, activation='relu')(x)
combined = Concatenate([x, dummy])
preds = Dense(1, activation='sigmoid',name='output')(combined)
model = Model(inputs=[sequence_input,dummy], outputs=[preds])
print(model.summary())
I feel I'm missing something essential but can't figure out what.
text + dummy --> binary_prediction
Your Concatenate call is not correct, it should be:
combined = Concatenate()([x, dummy])
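Concatenate is a layer class, so it is instantiated first (optionally with an axis) and then called on the list of tensors. One more caveat, which is my assumption rather than something from your traceback: since dummy is declared as int32, you may also need to cast it to float32 before concatenating it with the float features, e.g.:
from keras.layers import Lambda
from keras import backend as K
dummy_float = Lambda(lambda t: K.cast(t, 'float32'))(dummy)  # cast so dtypes match before concatenating
combined = Concatenate()([x, dummy_float])
preds = Dense(1, activation='sigmoid', name='output')(combined)
Alternatively, declaring the dummy Input with dtype='float32' in the first place avoids the cast.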
I want to train a Keras model where the input is a vector of size (20, 300). The problem is that I also need to feed the model a fixed list of vectors that should be used on each training step. The list of vectors is fixed for all training examples, so here's what I've tried.
def create_model(num_filters=64, embedding_dim=300, seq_len=20):
    # input1, shape (?, 20, 300)
    input1 = Input(shape=(seq_len, embedding_dim,), dtype='float32')  # Input1: taken from the model input
    # input2, shape (5, 20, 300)
    input2 = get_input2()  # Input2: taken from outside the model
    # CNN encoding of input1
    convs = []
    filter_sizes = [1, 2, 3]
    for fsz in filter_sizes:
        x = Conv1D(num_filters, fsz, activation='relu', padding='same')(input1)
        x = MaxPooling1D()(x)
        convs.append(x)
    output1 = Concatenate(axis=-1)(convs)
    output1 = Flatten()(output1)
    # CNN encoding of input2
    convs1 = []
    filter_sizes = [1, 2, 3]
    for fsz in filter_sizes:
        x1 = Conv1D(num_filters, fsz, activation='relu', padding='same')(input2)
        x1 = MaxPooling1D()(x1)
        convs1.append(x1)
    output2 = Concatenate(axis=-1)(convs1)
    output2 = Flatten()(output2)
However, this implementation throws a ValueError:
"ValueError: Layer conv1d_60 was called with an input that isn't a symbolic tensor. Received type: ."
How can this be done in Keras?
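One possible direction (a sketch of mine, not a tested answer; it assumes TF 2.x and treats the five fixed vectors as one concatenated sequence of length 5*20): inject the fixed block into the graph with a Lambda layer that tiles it per example, so the Conv1D layers receive a symbolic tensor instead of a raw NumPy array.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Input, Model
fixed_block = np.random.rand(5, 20, 300).astype('float32')  # stand-in for get_input2()
def create_model(num_filters=64, embedding_dim=300, seq_len=20):
    input1 = Input(shape=(seq_len, embedding_dim), dtype='float32')
    # Tile the fixed vectors once per example so both branches share the batch dimension.
    def inject_fixed(x):
        batch = tf.shape(x)[0]
        fixed = tf.constant(fixed_block.reshape(1, -1, 300))  # (1, 100, 300)
        return tf.tile(fixed, [batch, 1, 1])                  # (batch, 100, 300)
    input2 = layers.Lambda(inject_fixed)(input1)  # symbolic tensor, unlike the raw array
    def encode(t):
        convs = []
        for fsz in [1, 2, 3]:
            c = layers.Conv1D(num_filters, fsz, activation='relu', padding='same')(t)
            convs.append(layers.MaxPooling1D()(c))
        return layers.Flatten()(layers.Concatenate(axis=-1)(convs))
    merged = layers.Concatenate()([encode(input1), encode(input2)])
    out = layers.Dense(1, activation='sigmoid')(merged)
    return Model(inputs=input1, outputs=out)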