I want to build a model using Conv2D that finds vertical and horizontal patterns describing the relationship between a series of features in a 2D matrix with Height=N and Width=6. This is not an image but a list of events, each consisting of 6 variables, so the matrix size is Nx6. N is not a constant value.
I want the model to find these patterns using vertical and horizontal kernels.
Here is my code:
x = tf.keras.Input(shape=(None, 6, 1), name='input')
h1 = tf.keras.layers.Conv2D(128, (1, 6), name="conv2d_1", strides=(1, 1), activation="relu")(x)
h2 = tf.keras.layers.Conv2D(128, (6, 1), name="conv2d_2", strides=(1, 1), activation="relu")(x)
h1 = tf.keras.layers.Reshape((1, 128))(h1)
h2 = tf.keras.layers.Reshape((1, 6*128))(h2)
h3 = tf.keras.layers.Concatenate()([h1, h2])
h4 = tf.keras.layers.Dense(100,name='dense-1', activation='relu')(h3)
y1 = tf.keras.layers.Dense(1, name='output1', activation='sigmoid')(h4)
y2 = tf.keras.layers.Dense(1, name='output2', activation='linear')(h4)
net = tf.keras.Model(inputs=[x], outputs=[y1, y2])
opt = tf.keras.optimizers.get('Adagrad')
opt.learning_rate = 0.003
net.compile(loss='mean_squared_error', optimizer=opt, metrics=['accuracy'])
net.summary()
for i in range(len(exp)):
    x = np.array(exp[i]).reshape((1, len(exp[i]), 6, 1))
    loss = net.train_on_batch(x, np.array([[exp[i]] + [np.sum(x, where=[False, False, False, False, False, True])]]))
    clear_output(wait=True)
    print(loss)
I have tried different ways to build this model but cannot find a correct one; each time I get a different error. The last one is:
non-broadcastable operand with shape (1,151,6,1) doesn't match the broadcast shape (1,151,6,6)
Please advise how to solve this. I really want to go this way because I want to inspect this model and its results, to see whether it is able to find the horizontal and vertical patterns in how an event's variables correlate with the variables of other events.
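Not part of the original question, but a minimal sketch of one way this two-branch idea could be wired so that it handles a variable N, assuming it is acceptable to collapse the variable event dimension with global pooling before the dense heads. As a side note, the quoted broadcast error most likely comes from the np.sum(..., where=[...]) call rather than from the model itself: a (6,)-shaped where mask cannot be broadcast against an array of shape (1, 151, 6, 1), so indexing the wanted column directly avoids it.

# Hedged sketch, not the asker's final code: GlobalMaxPooling2D collapses the
# variable-length event dimension, so both branches reduce to fixed-size vectors
# that can be concatenated regardless of N.
import numpy as np
import tensorflow as tf

x = tf.keras.Input(shape=(None, 6, 1), name='input')                  # (batch, N, 6, 1)

# Horizontal kernel: spans all 6 variables of a single event
h1 = tf.keras.layers.Conv2D(128, (1, 6), activation='relu', name='conv2d_1')(x)
h1 = tf.keras.layers.GlobalMaxPooling2D()(h1)                          # (batch, 128)

# Vertical kernel: spans 6 consecutive events for each variable
h2 = tf.keras.layers.Conv2D(128, (6, 1), activation='relu', name='conv2d_2')(x)
h2 = tf.keras.layers.GlobalMaxPooling2D()(h2)                          # (batch, 128)

h = tf.keras.layers.Concatenate()([h1, h2])                            # (batch, 256)
h = tf.keras.layers.Dense(100, activation='relu', name='dense-1')(h)
y1 = tf.keras.layers.Dense(1, activation='sigmoid', name='output1')(h)
y2 = tf.keras.layers.Dense(1, activation='linear', name='output2')(h)

net = tf.keras.Model(inputs=[x], outputs=[y1, y2])
net.compile(loss='mean_squared_error', optimizer='adagrad')

# One event list per batch; each output needs its own (1, 1) target
batch = np.random.random((1, 151, 6, 1)).astype('float32')             # hypothetical N=151
target1 = np.array([[0.0]])                                             # placeholder label
target2 = np.array([[batch[:, :, 5, :].sum()]])                         # e.g. sum of the 6th variable
loss = net.train_on_batch(batch, [target1, target2])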
I was wondering if you would be able to help me with an error that I am getting in the code I am writing.
I have 2 datasets as inputs and 1 other as the target dataset. All datasets are sets of images with dimensions (17, 20, 1).
I set up the code as:
from tensorflow.keras.layers import Input, Conv2D, Concatenate
from tensorflow.keras import Model, Sequential
# define two sets of inputs
inputA = Input(shape=(17, 20, 1))
inputB = Input(shape=(17, 20, 1))
# the first branch operates on the first input
x = Sequential()(inputA)
x = Conv2D(filters=64, kernel_size=(3,3), activation='relu')(x)
x = Model(inputs=inputA, outputs=x)
# the second branch operates on the second input
y = Sequential()(inputB)
y = Conv2D(filters=64, kernel_size=(3,3), activation='relu')(y)
y = Model(inputs=inputB, outputs=y)
# combine the output of the two branches
combined = Concatenate()([x.output, y.output])
# apply a FC layer and then a regression prediction on the
# combined outputs
z = Sequential()(combined)
z = Conv2D(filters=64, kernel_size=(3,3), activation='relu')(z)
# our model will accept the inputs of the two branches and
# then output a single value
model = Model(inputs=[x.input, y.input], outputs=z)
model.compile(loss="mean_absolute_percentage_error", optimizer='adam', metrics=['accuracy'])
history = model.fit([input1_train, input2_train], target_train, validation_data=([input1_test, input2_test], target_test), epochs=100, verbose=0)
Then I get the following error:
ValueError: Dimensions must be equal, but are 17 and 13 for '{{node mean_absolute_percentage_error/sub}} = Sub[T=DT_FLOAT](IteratorGetNext:1, model_18/conv2d_18/Relu)' with input shapes: [?,17,20,1], [?,13,16,64].
I also tested this code with images of shape (20, 20, 1), and got this error:
ValueError: Dimensions must be equal, but are 20 and 16 for '{{node mean_absolute_percentage_error/sub}} = Sub[T=DT_FLOAT](IteratorGetNext:1, model_15/conv2d_15/Relu)' with input shapes: [?,20,20,1], [?,16,16,64].
But when I set the kernel size to (1, 1), the code runs with no problem.
Does anyone know where the problem comes from, and what I should do?
I would really appreciate any help fixing this.
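Not part of the original question, but a short note on where the error comes from, plus a hedged sketch of one possible fix: each Conv2D with kernel_size (3, 3) and the default padding='valid' trims 2 pixels from each spatial dimension (17x20 -> 15x18 after the branch conv, 13x16 after the conv on the combined tensor), so the model output no longer matches the (17, 20, 1) target that the loss compares it against; with a (1, 1) kernel nothing is trimmed, which is why that case runs. Keeping padding='same' everywhere and ending with a single-filter layer makes the output shape match the target:

# Hedged sketch: padding='same' keeps the 17x20 spatial size and a final
# 1-filter Conv2D produces output of shape (17, 20, 1), matching the target.
from tensorflow.keras.layers import Input, Conv2D, Concatenate
from tensorflow.keras import Model

inputA = Input(shape=(17, 20, 1))
inputB = Input(shape=(17, 20, 1))

x = Conv2D(64, (3, 3), activation='relu', padding='same')(inputA)    # (17, 20, 64)
y = Conv2D(64, (3, 3), activation='relu', padding='same')(inputB)    # (17, 20, 64)

combined = Concatenate()([x, y])                                      # (17, 20, 128)
z = Conv2D(64, (3, 3), activation='relu', padding='same')(combined)   # (17, 20, 64)
z = Conv2D(1, (1, 1), activation='linear')(z)                          # (17, 20, 1)

model = Model(inputs=[inputA, inputB], outputs=z)
model.compile(loss='mean_absolute_percentage_error', optimizer='adam')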
I have a question about the input and output (layer) of a DQN.
For example:
Two points: P1(x1, y1) and P2(x2, y2)
P1 has to walk towards P2
I have the following information:
Current position P1 (x/y)
Current position P2 (x/y)
Distance P1-P2 (x/y)
Direction P1-P2 (x/y)
P1 has 4 possible actions:
Up
Down
Left
Right
How do I have to set up the input and output layers?
4 input nodes
4 output nodes
Is that correct?
What do I have to do with the output?
I got 4 arrays with 4 values each as output.
Is doing argmax on the output correct?
Edit:
Input / State:
# Current position P1
state_pos = [x_POS, y_POS]
state_pos = np.asarray(state_pos, dtype=np.float32)
# Current position P2
state_wp = [wp_x, wp_y]
state_wp = np.asarray(state_wp, dtype=np.float32)
# Distance P1 - P2
state_dist_wp = [wp_x - x_POS, wp_y - y_POS]
state_dist_wp = np.asarray(state_dist_wp, dtype=np.float32)
# Direction P1 - P2
distance = [wp_x - x_POS, wp_y - y_POS]
norm = math.sqrt(distance[0] ** 2 + distance[1] ** 2)
state_direction_wp = [distance[0] / norm, distance[1] / norm]
state_direction_wp = np.asarray(state_direction_wp, dtype=np.float32)
state = [state_pos, state_wp, state_dist_wp, state_direction_wp]
state = np.array(state)
Network:
def __init__(self):
    self.q_net = self._build_dqn_model()
    self.epsilon = 1

def _build_dqn_model(self):
    q_net = Sequential()
    q_net.add(Dense(4, input_shape=(4,2), activation='relu', kernel_initializer='he_uniform'))
    q_net.add(Dense(128, activation='relu', kernel_initializer='he_uniform'))
    q_net.add(Dense(128, activation='relu', kernel_initializer='he_uniform'))
    q_net.add(Dense(4, activation='linear', kernel_initializer='he_uniform'))
    rms = tf.optimizers.RMSprop(lr=1e-4)
    q_net.compile(optimizer=rms, loss='mse')
    return q_net

def random_policy(self, state):
    return np.random.randint(0, 4)

def collect_policy(self, state):
    if np.random.random() < self.epsilon:
        return self.random_policy(state)
    return self.policy(state)

def policy(self, state):
    # Here I get 4 arrays with 4 values each as output
    action_q = self.q_net(state)
Adding input_shape=(4,2) in the first Dense layer is causing the output shape to be (None, 4, 4).
Defining q_net the following way solves it:
q_net = Sequential()
q_net.add(Reshape(target_shape=(8,), input_shape=(4,2)))
q_net.add(Dense(128, activation='relu', kernel_initializer='he_uniform'))
q_net.add(Dense(128, activation='relu', kernel_initializer='he_uniform'))
q_net.add(Dense(128, activation='relu', kernel_initializer='he_uniform'))
q_net.add(Dense(4, activation='linear', kernel_initializer='he_uniform'))
rms = tf.optimizers.RMSprop(lr = 1e-4)
q_net.compile(optimizer=rms, loss='mse')
return q_net
Here, q_net.add(Reshape(target_shape=(8,), input_shape=(4,2))) reshapes the (None, 4, 2) input to (None, 8) [Here, None represents the batch shape].
To verify, print q_net.output_shape and it should be (None, 4) [Whereas in the previous case it was (None, 4, 4)].
You also need to do one more thing. Recall that input_shape does not take batch shape into account. What I mean is, input_shape=(4,2) expects inputs of shape (batch_shape, 4, 2). Verify it by printing q_net.input_shape and it should output (None, 4, 2). Now, what you have to do is - add a batch dimension to your input. Simply you can do the following:
state_with_batch_dim = np.expand_dims(state,0)
And pass state_with_batch_dim to q_net as input. For example, you can call the policy method you wrote like policy(np.expand_dims(state,0)) and get an output of dimension (batch_shape, 4) [in this case (1,4)].
And here are the answers to your initial questions:
Your output layer should have 4 nodes (units).
Your first dense layer does not necessarily have to have 4 nodes (units). If you consider the Reshape layer, the notion of nodes or units does not fit there. You can think of the Reshape layer as a placeholder that takes a tensor of shape (None, 4, 2) and outputs a reshaped tensor of shape (None, 8).
Now, you should get outputs of shape (None, 4) - the 4 values represent the Q-values of the 4 corresponding actions. You do not need argmax to obtain the Q-values themselves; taking argmax over them gives you the index of the action with the highest Q-value.
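Not from the original answer, but a hedged sketch of how the policy method could pick an action from that (1, 4) output (the mapping from index to Up/Down/Left/Right is whatever order you decided on):

import numpy as np

# Hypothetical completion of the policy method from the question.
def policy(self, state):
    # Add the batch dimension: (4, 2) -> (1, 4, 2)
    state_input = np.expand_dims(state, 0)
    # Q-values for the 4 actions, shape (1, 4)
    action_q = self.q_net(state_input)
    # Index of the action with the highest Q-value
    return int(np.argmax(action_q, axis=-1)[0])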
It could make sense to feed the DQN some information on the direction it's currently facing too. You could set it up as (Current Pos X, Current Pos Y, X From Goal, Y From Goal, Direction).
The output layer should just be (Up, Left, Down, Right) in an order you determine. Taking the argmax of the output is suitable for this problem. The exact code depends on whether you are using TF or PyTorch.
I'm building a neural network using Keras and I'm a little lost on the LSTM layer input shape. Below is an image of the relevant part.
Both towers are similar, the only difference being that the left one accepts sequences of any length while the right one only accepts sequences of length 5. This results in their LSTM layers receiving an ambiguous sequence length and a sequence length of 4 respectively, both with 8 features per timestep. I would thus expect both LSTM layers to have an input_shape of (1, 8).
My confusion comes from the fact that both LSTM layers accept any input shape without a problem, which is why I think this might not work the way I think it does. I would expect the right LSTM layer to require an input shape whose first dimension is 1, 2 or 4, as only these sizes divide the input sequence of 4. Further, I would expect both to require the second dimension to always be 8.
Could someone explain why the LSTM layers can accept any input shape, and whether they process the sequences correctly with input_shape=(1, 8)? Below is the relevant code.
# Tower 1
inp_sentence1 = Input(shape=(None, 300, 1))
conv11 = Conv2D(32, (2, 300))(inp_sentence1)
reshape11 = K.squeeze(conv11, 2)
maxpl11 = MaxPooling1D(4, data_format='channels_first')(reshape11)
lstm11 = LSTM(units=6, input_shape=(1,8))(maxpl11)
# Tower 2
inp_sentence2 = Input(shape=(5, 300, 1))
conv21 = Conv2D(32, (2, 300))(inp_sentence2)
reshape21 = Reshape((4,32))(conv21)
maxpl21 = MaxPooling1D(4, data_format='channels_first')(reshape21)
lstm21 = LSTM(units=6, input_shape=(1,8))(maxpl21)
EDIT: Short reproduction of the problem on dummy data:
import random
import numpy
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Input, Conv2D, MaxPooling1D, LSTM, Reshape, Subtract, Dense
from tensorflow.keras.models import Model
# Tower 1
inp_sentence1 = Input(shape=(None, 300, 1))
conv11 = Conv2D(32, (2, 300))(inp_sentence1)
reshape11 = K.squeeze(conv11, 2)
maxpl11 = MaxPooling1D(4, data_format='channels_first')(reshape11)
lstm11 = LSTM(units=6, input_shape=(1,8))(maxpl11)
# Tower 2
inp_sentence2 = Input(shape=(5, 300, 1))
conv21 = Conv2D(32, (2, 300))(inp_sentence2)
reshape21 = Reshape((4,32))(conv21)
maxpl21 = MaxPooling1D(4, data_format='channels_first')(reshape21)
lstm21 = LSTM(units=6, input_shape=(1,8))(maxpl21)
# Combine towers
substract = Subtract()([lstm11, lstm21])
dense = Dense(16, activation='relu')(substract)
final = Dense(1, activation='sigmoid')(dense)
# Build model
model = Model([inp_sentence1, inp_sentence2], final)
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
# Create data
random_length = random.randint(2, 10)
x1 = numpy.random.random((100, random_length, 300))
x2 = numpy.random.random((100, 5, 300))
y = numpy.random.randint(2, size=100)
# Train and predict on data
model.fit([x1, x2], y, epochs=10, batch_size=5)
prediction = model.predict([x1, x2])
prediction = [round(x) for [x] in prediction]
classification = prediction == y
print("accuracy:", sum(classification)/len(prediction))
I am trying to develop a 1D convolutional neural network with residual connections and batch normalization based on the paper Cardiologist-Level Arrhythmia Detection with Convolutional Neural Networks, using Keras.
This is the code so far:
# define model
x = Input(shape=(time_steps, n_features))
# First Conv / BN / ReLU layer
y = Conv1D(filters=n_filters, kernel_size=n_kernel, strides=n_strides, padding='same')(x)
y = BatchNormalization()(y)
y = ReLU()(y)
shortcut = MaxPooling1D(pool_size = n_pool)(y)
# First Residual block
y = Conv1D(filters=n_filters, kernel_size=n_kernel, strides=n_strides, padding='same')(y)
y = BatchNormalization()(y)
y = ReLU()(y)
y = Dropout(rate=drop_rate)(y)
y = Conv1D(filters=n_filters, kernel_size=n_kernel, strides=n_strides, padding='same')(y)
# Add Residual (shortcut)
y = add([shortcut, y])
# Repeated Residual blocks
for k in range(2, 3):  # smaller network for testing
    shortcut = MaxPooling1D(pool_size=n_pool)(y)
    y = BatchNormalization()(y)
    y = ReLU()(y)
    y = Dropout(rate=drop_rate)(y)
    y = Conv1D(filters=n_filters * k, kernel_size=n_kernel, strides=n_strides, padding='same')(y)
    y = BatchNormalization()(y)
    y = ReLU()(y)
    y = Dropout(rate=drop_rate)(y)
    y = Conv1D(filters=n_filters * k, kernel_size=n_kernel, strides=n_strides, padding='same')(y)
    y = add([shortcut, y])
z = BatchNormalization()(y)
z = ReLU()(z)
z = Flatten()(z)
z = Dense(64, activation='relu')(z)
predictions = Dense(classes, activation='softmax')(z)
model = Model(inputs=x, outputs=predictions)
# Compiling
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['categorical_accuracy'])
# Fitting
model.fit(train_x, train_y, epochs=n_epochs, batch_size=n_batch)
And this is the graph of a simplified model of what I am trying to build.
The model described in the paper uses an incrementing number of filters:
The network consists of 16 residual blocks with 2 convolutional layers per block. The convolutional layers all have a filter length of 16 and have 64k filters, where k starts out as 1 and is incremented every 4-th residual block. Every alternate residual block subsamples its inputs by a factor of 2, thus the original input is ultimately subsampled by a factor of 2^8. When a residual block subsamples the input, the corresponding shortcut connections also subsample their input using a Max Pooling operation with the same subsample factor.
But I can only make it work if I use the same number of filters in every Conv1D layer, with k=1, strides=1 and padding='same', and without applying any MaxPooling1D. Any change in these parameters causes a tensor size mismatch and the model fails to build, with the following error:
ValueError: Operands could not be broadcast together with shapes (70, 64) (70, 128)
Does anyone have any idea on how to fix this size mismatch and make it work?
In addition, if the input has more than one channel (or feature), the mismatch is even worse! Is there a way to deal with more than one channel?
The tensor shape mismatch is happening in the add([shortcut, y]) layer. Because you are using a MaxPooling1D layer, the shortcut's time steps are halved by default (you can change this with the pool_size parameter), while the residual branch is not reducing its time steps by the same amount. You should apply strides=2 with padding='same' in one of the Conv1D layers of the residual branch (preferably the last one) before adding shortcut and y.
For reference, you can check out the ResNet code in keras-applications on GitHub.
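Not taken from the paper's code, but a minimal hedged sketch of one residual block that keeps both branches compatible, using assumed values for the kernel size, pool size and dropout rate. The shortcut is downsampled with MaxPooling1D and projected with a 1x1 Conv1D, so both its time steps and its filter count match the residual branch even when the number of filters grows with k (and regardless of how many channels the block's input has):

from tensorflow.keras.layers import (Conv1D, BatchNormalization, ReLU,
                                     Dropout, MaxPooling1D, add)

def residual_block(y, filters, kernel_size=16, pool_size=2, drop_rate=0.2):
    # Shortcut: subsample the time steps and project the channels to `filters`
    shortcut = MaxPooling1D(pool_size=pool_size, padding='same')(y)
    shortcut = Conv1D(filters, kernel_size=1, padding='same')(shortcut)

    # Residual branch: the last conv uses strides=pool_size so its time steps
    # shrink by the same factor as the shortcut
    y = BatchNormalization()(y)
    y = ReLU()(y)
    y = Dropout(rate=drop_rate)(y)
    y = Conv1D(filters, kernel_size, padding='same')(y)
    y = BatchNormalization()(y)
    y = ReLU()(y)
    y = Dropout(rate=drop_rate)(y)
    y = Conv1D(filters, kernel_size, strides=pool_size, padding='same')(y)

    return add([shortcut, y])

Incrementing the filter count every fourth block then amounts to calling residual_block(y, n_filters * k) with k bumped on the appropriate iterations.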
I am building an image classifier with localisation using a CNN.
My CNN takes an image as input; however, after the last conv layer I want to split it into two parts, one for image classification and the other for image localisation.
Needless to say, one part should use mean squared error and the other binary_crossentropy. My structure is something like:
input_image = Input(shape=(IMG_W, IMG_H, 3))
# Layer 1
x = Conv2D(32, (3,3), strides=(1,1), padding='same', name='conv_1', use_bias=False)(input_image)
x = BatchNormalization(name='norm_1')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 2
x = Conv2D(64, (3,3), strides=(1,1), padding='same', name='conv_2', use_bias=False)(x)
x = BatchNormalization(name='norm_2')(x)
x = LeakyReLU(alpha=0.1)(x)
Now I want to split it into two Dense (FC) branches:
class_layer = x
class_layer = Dense(256,activation="relu")(class_layer)
class_layer = Dense(2,activation="softmax")(class_layer)
model_one = Model(input_image,class_layer)
model_one.compile(loss="binary_crossentropy", optimizer=keras.optimizers.Adam(), metrics=['accuracy'])
And the layers for image localisation:
x = Dense(1024,activation="relu")(x)
x = Dense(256,activation="relu")(x)
x = Dense(4,activation="relu")(x)
model = Model(input_image,x)
model.compile(loss="mean_squared_error", optimizer=keras.optimizers.Adam(),metrics=['accuracy'])
However, how can I concatenate the layers so that the resulting vector will be (2 + 4)?
Can I even achieve splitting like this?
I know about model.concatenate; however, this would have to be called before compiling, so each part wouldn't be able to have a different loss function.
Thanks for any help and answers.
You can initialize your model with multiple outputs and specify a loss for each of them. If you want the loss from model (the localisation output x) to have weight a and the loss from model_one (the classification output class_layer) to have weight b, so that your total loss looks like a*mse + b*binary_ce, then you would have something like:
model = Model(input_image, [x, class_layer])
model.compile(loss=['mean_squared_error', 'binary_crossentropy'],
loss_weights=[a, b],
optimizer=keras.optimizers.Adam())
See the loss and loss_weights parameters in the documentation for Model.compile for more details: https://keras.io/models/model/.
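As a follow-up usage sketch (the array names images, box_targets and class_targets are hypothetical), the targets passed to fit must follow the same order as outputs=[x, class_layer]:

# Hypothetical training call for the two-output model above: first the 4-value
# localisation targets (matched with mean_squared_error), then the 2-class
# labels (matched with binary_crossentropy).
history = model.fit(
    images,                        # shape (num_samples, IMG_W, IMG_H, 3)
    [box_targets, class_targets],  # shapes (num_samples, 4) and (num_samples, 2)
    epochs=10,
    batch_size=32,
)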