How to define a multi-dimensional neural network with Keras (Python)

I have implemented a simple neural network with keras that takes an input of 50 values and returns a classification of '0' or '1'. I believe the model is expecting an input shape of (50, 1). I'd like to add another 50 data values for each input, but I'd like them to be associated with the original 50 respective inputs. So instead of making the input of shape (100, 1), I guess I'd like to make it of shape (50, 2). I would like the neural network to know from the start that each input feature has two values associated with it, instead of it thinking there are 100 separate input features. Here's what I have so far:
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(50, input_dim=50, kernel_initializer='normal', activation='relu'))
model.add(Dense(100, kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
Can anyone show me how to alter this structure to accept my new input shape?
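One possible way to express this, sketched below: accept an input of shape (50, 2) and let a first layer combine the two values belonging to each feature before the fully connected layers mix information across features. A Conv1D with kernel_size=1 does exactly that pairing; the filter count of 8 is an arbitrary illustrative choice, and x_train is assumed to be an array of shape (num_samples, 50, 2).

from keras.models import Sequential
from keras.layers import Conv1D, Flatten, Dense

model = Sequential()
# 50 feature positions, each carrying 2 associated values (channels).
# The kernel_size=1 convolution combines the 2 values at each position
# before the Dense layers mix information across positions.
model.add(Conv1D(8, kernel_size=1, activation='relu', input_shape=(50, 2)))
model.add(Flatten())
model.add(Dense(100, kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

Note that the kernel_size=1 convolution shares its weights across all 50 positions; if each position should instead get its own weights, flattening the (50, 2) input to (100,) and using Dense layers is the fully general case.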

Related

How to decide input and output shape in DepthwiseConv2D (CNN) in TensorFlow (Python)

My actual concern is how to choose the input and output shapes based on the data I have.
The shapes of my data are (90000, 6) for x and (90000,) for y, and y contains two labels.
My data is in a CSV file (I am using 6 columns as features and the last column as the label); I am not using image data.
model = models.Sequential()
model.add(tf.keras.layers.DepthwiseConv2D((3, 3), padding='valid', depth_multiplier=10, input_shape=(,)))  # input_shape is what I am unsure about
# 2 max pooling layers and 1 DepthwiseConv2D
model.add(layers.Flatten())
model.add(layers.Dense(200, activation='relu'))
model.add(layers.Dense(2, activation='softmax'))
Can someone tell me how to decide the input shape, and what kind of reshaping I should do on the data before passing it into the model?
I am looking for suggestions on how to decide the input shape and what I should take care of.
Also, let me know if the last layer is correct.
I already posted one problem related to this, but this is a more simplified version of what I actually want to do.
Thanks in advance.
I am a little confused by the implementation here: why are you using a two-dimensional CNN? You could use tf.keras.layers.DepthwiseConv1D instead.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, Dropout, MaxPooling1D, Flatten, Dense

# Assumes x has been reshaped to 3D, e.g. (90000, 6, 1), and y is one-hot encoded.
model = Sequential()
model.add(Conv1D(filters=64, kernel_size=3, activation='relu', input_shape=(x.shape[1], x.shape[2])))
model.add(Conv1D(filters=64, kernel_size=3, activation='relu'))
model.add(Dropout(0.5))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
I think something like that might solve your problem.
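For the reshaping part of the question: to make the shapes in the snippet above line up, the (90000, 6) feature matrix needs an extra channel axis for Conv1D, and y needs to be one-hot encoded so that y.shape[1] equals 2. A sketch, assuming the CSV holds 6 feature columns followed by the label column (the filename 'data.csv' is an assumed placeholder):

import numpy as np
from tensorflow.keras.utils import to_categorical

data = np.loadtxt('data.csv', delimiter=',')  # 6 feature columns + 1 label column
x = data[:, :6].reshape(-1, 6, 1)  # (90000, 6) -> (90000, 6, 1): 6 steps, 1 channel
y = to_categorical(data[:, 6])     # (90000,) -> (90000, 2) for the 2-unit softmax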

Keras conv nn predicting only one class?

So I've been building a convolutional neural network. I'm trying to predict whether a board-game state (a 10x10 matrix) will lead to a win (binary 0 or 1) or not.
I have six million examples, which you would think would be enough, but clearly it is not, as my network is predicting everything as one class...
Is there something obvious I'm missing? I tried giving it even just 10 examples, and it still predicts them all as the same class.
The input matrices are 10x10 and contain integers.
Input reshaping:
x_train = x_train.reshape(len(x_train),10,10,1)
Actual model building:
import keras
from keras import metrics
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, BatchNormalization

model = Sequential()
model.add(Conv2D(3, kernel_size=(1, 1), strides=(1, 1), activation='relu', input_shape=(10, 10, 1)))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(1, 1)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(500, activation='tanh'))
model.add(Dropout(0.5))
model.add(keras.layers.Dense(75, activation='relu'))
model.add(BatchNormalization())
model.add(keras.layers.Dense(10, activation='sigmoid'))
model.add(keras.layers.Dense(1, kernel_initializer='normal', activation='sigmoid'))

optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, decay=0.01, nesterov=True)
model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=[metrics.binary_accuracy])
model.fit(x_train, y_train, epochs=100, batch_size=128, verbose=1)
I've tried modifying the learning rate, momentum, decay, the kernel sizes, the layer types, the sizes... I checked for dying ReLU, and that didn't seem to be the problem. Removing the dropout/batch-normalization layers (or various random layers) didn't do anything either.
The data have roughly a 53/47% split across the labels, so it's not class imbalance either.
I'm more confused because even when I ask it to predict on the training set, it STILL insists on labeling everything as one class, even when there are only ~20 samples or fewer.
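One diagnostic that might help narrow this down (a sketch, assuming x_train and y_train as above): check whether the raw sigmoid outputs are actually stuck on one side of 0.5, and whether a deliberately overfit run on a tiny subset can memorize it. A network this size should drive the loss to near zero on ~20 examples; if it cannot, the problem is more likely in the data, labels, or loss setup than in the architecture.

import numpy as np

# Inspect raw probabilities rather than thresholded classes.
probs = model.predict(x_train[:20])
print(probs.min(), probs.max(), probs.mean())  # all ~0.5 vs. all near 0 or 1 are different failure modes

# Overfit sanity check on a tiny subset: loss should approach 0.
model.fit(x_train[:20], y_train[:20], epochs=500, batch_size=20, verbose=0)
print(model.evaluate(x_train[:20], y_train[:20]))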

Input nodes in Keras NN

I am trying to create a neural network based on the iris dataset. I have an input of four dimensions: X = dataset[:, 0:4].astype(float). Then I create a neural network with four nodes:
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(4, input_dim=4, init='normal', activation='relu'))
model.add(Dense(3, init='normal', activation='sigmoid'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
As I understand it, each dimension is passed to a separate node: four dimensions, four nodes. When I create a neural network with 8 input nodes, how does it work? Performance is still the same as with 4 nodes.
model = Sequential()
model.add(Dense(8, input_dim=4, init='normal', activation='relu'))
model.add(Dense(3, init='normal', activation='sigmoid'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
You have an error in your last activation. Use softmax instead of sigmoid and run it again.
Replace
model.add(Dense(3, init='normal', activation='sigmoid'))
with
model.add(Dense(3, init='normal', activation='softmax'))
To answer your main question of "How does this work?":
Conceptually, you are initially creating a fully connected (Dense) neural network with 3 layers: an input layer with 4 nodes, a hidden layer with 4 nodes, and an output layer with 3 nodes. Each node in the input layer has a connection to every node in the hidden layer, and likewise from the hidden layer to the output layer.
In your second example, you just increased the number of nodes in the hidden layer from 4 to 8. A larger network can be good, as it can be trained to "look" for more things in your data. But make a layer too large and you may overfit: the network memorizes too much of the training data, when it really just needs a general idea of the training data so that it can still recognize slightly different data, i.e. your testing data.
The reason you may not have seen an increase in performance is likely either overfitting or your activation function; try a function other than relu in your hidden layer. If you don't see any improvement after trying a few different combinations, you are likely overfitting.
Hope this helps.
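Putting the suggested fix together, a minimal end-to-end sketch (not from the original post; it assumes X as defined above and a hypothetical integer label array 'labels' with class ids 0..2 that gets one-hot encoded to match categorical_crossentropy, and it uses kernel_initializer, the modern spelling of the old init= keyword):

from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical

Y = to_categorical(labels)  # 'labels' assumed: integer class ids 0..2 for the iris species

model = Sequential()
model.add(Dense(8, input_dim=4, kernel_initializer='normal', activation='relu'))
model.add(Dense(3, kernel_initializer='normal', activation='softmax'))  # softmax, per the fix above
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, Y, epochs=100, batch_size=8, verbose=0)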

How to train using batch inputs with Keras, but predict with a single example, with an LSTM?

I have a list of training data that I am using for training. However, when I predict, the prediction will be done online, with a single example at a time.
If I declare my model with an input like the following
from keras.models import Sequential
from keras.layers import Dense, LSTM
from keras.optimizers import SGD

model = Sequential()
model.add(Dense(64, batch_input_shape=(100, 5, 1), activation='tanh'))
model.add(LSTM(32, stateful=True))
model.add(Dense(1, activation='linear'))
optimizer = SGD(lr=0.0005)
model.compile(loss='mean_squared_error', optimizer=optimizer)
When I go to predict with a single example of shape (1, 5, 1), it gives the following error.
ValueError: Shape mismatch: x has 100 rows but z has 1 rows
The solution I came up with was to just train my model iteratively, using a batch_input_shape of (1, 5, 1) and calling fit on each single example. This is incredibly slow.
Is there not a way to train on a large batch size, but predict with a single example using LSTM?
Thanks for the help.
Try something like this:
model2 = Sequential()
model2.add(Dense(64, batch_input_shape=(1, 5, 1), activation='tanh'))
model2.add(LSTM(32, stateful=True))
model2.add(Dense(1, activation='linear'))
optimizer2 = SGD(lr=0.0005)
model2.compile(loss='mean_squared_error', optimizer=optimizer2)

# Copy the trained weights, layer by layer, from the batch-trained model.
for nb, layer in enumerate(model.layers):
    model2.layers[nb].set_weights(layer.get_weights())
You are simply copying the weights from one model to the other.
You have defined the input_shape in the first layer, so sending a shape that does not match the preset input shape is invalid.
There are two ways to achieve what you want.
Method 1: modify your model by changing
batch_input_shape=(100, 5, 1)
to
input_shape=(5, 1)
to avoid a preset batch size. You can then set batch_size=100 in model.fit(); a sketch of this follows below.
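A minimal sketch of Method 1, using the same toy dimensions as the question (x_train, y_train, and x_single are assumed arrays of shapes (N, 5, 1), (N, 1), and (1, 5, 1)):

from keras.models import Sequential
from keras.layers import Dense, LSTM
from keras.optimizers import SGD

model = Sequential()
model.add(Dense(64, input_shape=(5, 1), activation='tanh'))  # no fixed batch size
model.add(LSTM(32))  # stateful removed; see the note on stateful below
model.add(Dense(1, activation='linear'))
model.compile(loss='mean_squared_error', optimizer=SGD(lr=0.0005))

model.fit(x_train, y_train, batch_size=100)  # batch size chosen at training time
prediction = model.predict(x_single)         # a single (1, 5, 1) example now works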
Edit: Method 2
You define the exact same model as model2 (but with batch_input_shape=(1, 5, 1)). Then model2.set_weights(model1.get_weights()).
If you want to use stateful=True, you actually want to use the hidden states from the last batch as the initial states for the next batch, so the batch sizes must match. Otherwise, you can just remove stateful=True.

Keras : How can I transform my data in order to fit for this specific Deep Learning architecture?

Let's say that my training data is a 3D numpy array with dimensions (4155, 5, 150). The data consists of 4155 training samples, each one a 5x150 matrix, and my labels are a vector of length 4155. I then want to feed it to the following architecture:
model = Sequential()
model.add(Convolution1D(input_dim=4,
                        input_length=1000,
                        nb_filter=320,
                        filter_length=26,
                        border_mode="valid",
                        activation="relu",
                        subsample_length=1))
model.add(MaxPooling1D(pool_length=13, stride=13))
model.add(Dropout(0.2))
model.add(brnn)  # brnn: a Bidirectional LSTM layer defined elsewhere
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(input_dim=75*640, output_dim=925))
model.add(Activation('relu'))
model.add(Dense(input_dim=925, output_dim=919))
model.add(Activation('sigmoid'))
The problem is that I don't know how to change the dimensionality of my input array so that it fits this model. The layer parameters specified above are just an example; I simply want to use a convolutional layer, followed by a bidirectional LSTM, and finally two fully connected layers. Does anyone have an idea?
Thanks in advance!
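Not a definitive answer, but one way the stated design (a convolutional layer, then a bidirectional LSTM, then two fully connected layers) could be wired up for input of shape (5, 150), written with the modern Keras API rather than the Keras 1 keywords used above. The filter counts and layer sizes are illustrative assumptions, as is the single sigmoid output (the post doesn't say what the labels look like):

from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Dropout, Bidirectional, LSTM, Flatten, Dense

model = Sequential()
# Treat each sample as 5 timesteps with 150 features each.
model.add(Conv1D(64, kernel_size=3, padding='same', activation='relu', input_shape=(5, 150)))
model.add(MaxPooling1D(pool_size=2))  # (5, 64) -> (2, 64)
model.add(Dropout(0.2))
model.add(Bidirectional(LSTM(32, return_sequences=True)))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(1, activation='sigmoid'))  # assuming one binary label per sample
model.compile(loss='binary_crossentropy', optimizer='adam')

# x_train of shape (4155, 5, 150) and y_train of shape (4155,)
# can then be passed directly to model.fit(x_train, y_train).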
