How do I train multiple neural nets simultaneously in Keras?

How do I train multiple models and combine them at the output layer?
For example:
model_one = Sequential() # model 1
model_one.add(Convolution2D(32, 3, 3, activation='relu', input_shape=(1,28,28)))
model_one.add(Flatten())
model_one.add(Dense(128, activation='relu'))
model_two = Sequential() # model 2
model_two.add(Dense(128, activation='relu', input_shape=(784,)))
model_two.add(Dense(128, activation='relu'))
model_???.add(Dense(10, activation='softmax')) # combine them here
model.compile(loss='categorical_crossentropy', # continue together
              optimizer='adam',
              metrics=['accuracy'])
model.fit(X_train, Y_train, # continue together somehow, even though this would never work because X_train and Y_train have the wrong formats
          batch_size=32, nb_epoch=10, verbose=1)
I've heard I can do this through a graph model but I can't find any documentation on it.
EDIT: in reply to the suggestion below:
A1 = Conv2D(20, kernel_size=(5,5), activation='relu', input_shape=(28, 28, 1))
---> B1 = MaxPooling2D(pool_size=(2,2))(A1)
throws this error:
AttributeError: 'Conv2D' object has no attribute 'get_shape'

Graph notation (i.e. the functional API) would do it for you. Essentially you give every layer a unique handle, then link back to the previous layer using the handle in brackets at the end:
layer_handle = Layer(params)(prev_layer_handle)
Note that the first layer must be an Input(shape=(x,y)) with no prior connection.
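For instance, the AttributeError in the question's edit comes from chaining off the Conv2D layer object itself rather than off a tensor; starting from an Input tensor fixes it (a minimal sketch):
from keras.layers import Input, Conv2D, MaxPooling2D
inp = Input(shape=(28, 28, 1))                               # input tensor, no prior connection
A1 = Conv2D(20, kernel_size=(5, 5), activation='relu')(inp)  # A1 is now a tensor, not a layer object
B1 = MaxPooling2D(pool_size=(2, 2))(A1)                      # so this chaining works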
Then when you make your model you need to tell it that it expects multiple inputs with a list:
model = Model(inputs=[in_layer1, in_layer2, ..], outputs=[out_layer1, out_layer2, ..])
Finally when you train it you also need to provide a list of input and output data that corresponds with your definition:
model.fit([x_train1, x_train2, ..], [y_train1, y_train2, ..])
Everything else stays the same, so you just need to combine the above to get the network layout that you want:
from keras.models import Model
from keras.layers import Input, Convolution2D, Flatten, Dense, Concatenate
# Note: Keras 2.0.2, channels-last dimension ordering
# Model 1
in1 = Input(shape=(28,28,1))
model_one_conv_1 = Convolution2D(32, (3, 3), activation='relu')(in1)
model_one_flat_1 = Flatten()(model_one_conv_1)
model_one_dense_1 = Dense(128, activation='relu')(model_one_flat_1)
# Model 2
in2 = Input(shape=(784, ))
model_two_dense_1 = Dense(128, activation='relu')(in2)
model_two_dense_2 = Dense(128, activation='relu')(model_two_dense_1)
# Model Final
model_final_concat = Concatenate(axis=-1)([model_one_dense_1, model_two_dense_2])
model_final_dense_1 = Dense(10, activation='softmax')(model_final_concat)
model = Model(inputs=[in1, in2], outputs=model_final_dense_1)
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.fit([X_train_one, X_train_two], Y_train,
          batch_size=32, epochs=10, verbose=1)
Documentation can be found in the Functional Model API. I'd recommend reading around other questions or checking out Keras' repo as well since the documentation currently doesn't have many examples.
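As a quick sanity check, the combined model above can be exercised end to end with random placeholder data; the array names and shapes here are assumptions, just to demonstrate the two-input fit call:
import numpy as np
from keras.utils import to_categorical
# placeholder data matching the two input branches and the 10-class output
X_train_one = np.random.rand(100, 28, 28, 1)  # images for the conv branch
X_train_two = np.random.rand(100, 784)        # flat vectors for the dense branch
Y_train = to_categorical(np.random.randint(0, 10, size=(100,)), 10)
model.fit([X_train_one, X_train_two], Y_train, batch_size=32, epochs=1, verbose=1)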

Related

How do I code input layer in Deep Learning using Keras (Basic)

Okay, so I'm pretty new to deep learning and have a very basic question. I have input data in an array containing 255 samples (array shape (255,)) in epochs_data and their corresponding labels in new_labels (array shape (255,)).
I split the data using the following code:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(epochs_data, new_labels, test_size = 0.2, random_state=30)
I'm using a sequential model:
from keras.models import Sequential
from keras import layers
from keras.layers import Dense, Activation, Flatten
model = Sequential()
I know how to code for the hidden layers and output layer:
model.add(Dense(500, activation='relu')) #Hidden Layer
model.add(Dense(2, activation='softmax')) #Output Layer
But I don't know how to code the input layer with input_shape specified. X_train is the input; it's an array of shape (180,). Please also tell me how to write the model.fit() call for the same. Any help is appreciated.
You have to add this line before the hidden layer. You can use whatever activation function you want. Note that this line represents both the input layer and the first hidden layer (you have to choose the number of neurons; I put 100):
model.add(Dense(100, input_shape=(X_train.shape[1],)))
EDIT:
Before fitting your model you have to configure your model with this line:
model.compile(loss = 'mse', optimizer = 'Adam', metrics = ['mse'])
Here you have to choose a loss/metric, which in this case is mean squared error, and an optimizer like Adam, Adamax, etc.
Then you can fit your model, choosing the data (X, y), the number of epochs, the validation split, and the batch size:
history = model.fit(X_train, y_train, epochs=200,
                    validation_split=0.1, batch_size=250)
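Putting the pieces together, a minimal sketch (assuming X_train is a 2-D array of shape (n_samples, n_features) and, given the 2-unit softmax output from the question, y_train is one-hot encoded with two columns):
from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
model.add(Dense(100, activation='relu', input_shape=(X_train.shape[1],)))  # input + first hidden layer
model.add(Dense(500, activation='relu'))                                   # hidden layer from the question
model.add(Dense(2, activation='softmax'))                                  # output layer from the question
model.compile(loss='mse', optimizer='Adam', metrics=['mse'])
history = model.fit(X_train, y_train, epochs=200,
                    validation_split=0.1, batch_size=250)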

Keras model only working with Sigmoid activation

I'm building a Keras model to categorise data into one of 9 categories. The issue is that it will only work with a sigmoid activation, which is designed for binary outputs; other activations result in 0 accuracy. What would I need to change for it to classify into each of the labels?
# Reshape data to add a new dimension
X_train = X_train.reshape((100, 150, 1))
Y_train = Y_train.reshape((100, 1, 1))
model = Sequential()
model.add(Conv1D(1, kernel_size=3, activation='relu', input_shape=(None, 1)))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='categorical_hinge', optimizer='adam', metrics=['accuracy'])
model.fit(x=X_train, y=Y_train, epochs=200, batch_size=20)
A single-unit dense layer is not what we use in the case of multi-class classification; you should first ensure that your Y data are one-hot encoded - if not, you can make them so using Keras utility functions:
num_classes=9
Y_train = keras.utils.to_categorical(Y_train, num_classes)
and then change your last layer to:
model.add(Dense(num_classes))
model.add(Activation('softmax'))
Also, if you don't have any specific reasons to use the categorical Hinge loss, I would suggest starting with loss='categorical_crossentropy' in your model compilation.
That said, your model seems too simple, and you may want to try adding some more layers...
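Putting those changes together, a corrected sketch (assumptions beyond the answer's text: the input length is pinned to 150 to match the question's reshape, Y_train holds the 100 integer class labels, and a Flatten layer is added so the classifier sees one vector per sample rather than one per timestep):
import keras
from keras.models import Sequential
from keras.layers import Conv1D, Dense, Activation, Flatten
num_classes = 9
Y_train = keras.utils.to_categorical(Y_train, num_classes)  # one-hot encode integer labels
model = Sequential()
model.add(Conv1D(1, kernel_size=3, activation='relu', input_shape=(150, 1)))
model.add(Flatten())  # collapse the time dimension before classifying
model.add(Dense(num_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x=X_train, y=Y_train, epochs=200, batch_size=20)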

How to convert 1D flattened MNIST Keras to LSTM model without unflattening?

I want to change my model architecture a bit on the LSTM so it accepts the same exact flattened inputs the fully connected approach does.
Working DNN model from Keras examples
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.utils import to_categorical
# import the data
from keras.datasets import mnist
# read the data
(x_train, y_train), (x_test, y_test) = mnist.load_data()
num_pixels = x_train.shape[1] * x_train.shape[2] # find size of one-dimensional vector
x_train = x_train.reshape(x_train.shape[0], num_pixels).astype('float32') # flatten training images
x_test = x_test.reshape(x_test.shape[0], num_pixels).astype('float32') # flatten test images
# normalize inputs from 0-255 to 0-1
x_train = x_train / 255
x_test = x_test / 255
# one hot encode outputs
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
num_classes = y_test.shape[1]
print(num_classes)
# define classification model
def classification_model():
    # create model
    model = Sequential()
    model.add(Dense(num_pixels, activation='relu', input_shape=(num_pixels,)))
    model.add(Dense(100, activation='relu'))
    model.add(Dense(num_classes, activation='softmax'))
    # compile model
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    return model
# build the model
model = classification_model()
# fit the model
model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=10, verbose=2)
# evaluate the model
scores = model.evaluate(x_test, y_test, verbose=0)
Same problem, but trying an LSTM (still erroring)
def kaggle_LSTM_model():
    model = Sequential()
    model.add(LSTM(128, input_shape=(x_train.shape[1:]), activation='relu', return_sequences=True))
    # What does return_sequences=True do?
    model.add(Dropout(0.2))
    model.add(Dense(32, activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(10, activation='softmax'))
    opt = tf.keras.optimizers.Adam(lr=1e-3, decay=1e-5)
    model.compile(loss='sparse_categorical_crossentropy', optimizer=opt,
                  metrics=['accuracy'])
    return model
model_kaggle_LSTM = kaggle_LSTM_model()
# fit the model
model_kaggle_LSTM.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=10, verbose=2)
# evaluate the model
scores = model_kaggle_LSTM.evaluate(x_test, y_test, verbose=0)
Problem is here:
model.add(LSTM(128, input_shape=(x_train.shape[1:]), activation='relu', return_sequences=True))
ValueError: Input 0 is incompatible with layer lstm_17: expected
ndim=3, found ndim=2
If I go back and don't flatten x_train and y_train, it works. However, I'd like this to be "just another model choice" that feeds off the same pre-processed input. I thought passing shape[1:] would work, as that is the real flattened input_shape. I'm sure it's something easy I'm missing about the dimensionality, but I couldn't get it after an hour of twiddling and debugging, although I did figure out that not flattening the 28x28 to 784 works; I just don't understand why. Thanks a lot!
For bonus points, an example of how to do either DNN or LSTM in either 1D (784,) or 2D (28, 28) would be the best.
RNN layers such as LSTM are meant for sequence processing (i.e. a series of vectors whose order of appearance matters). You can look at an image from top to bottom, and consider each row of pixels as a vector. The image would then be a sequence of vectors and could be fed to the RNN layer. Therefore, according to this description, you should expect the RNN layer to take an input of shape (sequence_length, number_of_features). That's why when you feed the images to the LSTM network in their original shape, i.e. (28, 28), it works.
Now if you insist on feeding the LSTM model the flattened image, i.e. with shape (784,), you have at least two options: you can either consider this as a sequence of length one, i.e. (1, 784), which does not make much sense, or you can add a Reshape layer to your model to reshape the input back to its original shape, suitable for the input of an LSTM layer, like this:
from keras.layers import Reshape

def kaggle_LSTM_model():
    model = Sequential()
    model.add(Reshape((28, 28), input_shape=x_train.shape[1:]))
    # the rest is the same...
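Filled in with the layers from the question, the whole flattened-input model would look like the sketch below. Two deliberate changes, flagged because they are not in the original answer: return_sequences=True is dropped so the LSTM emits a single vector for the classifier head, and the loss is categorical_crossentropy to match the one-hot y_train produced by the preprocessing above (the question's sparse_categorical_crossentropy expects integer labels instead). The function name is hypothetical:
from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout, Reshape

def kaggle_LSTM_model_flat():
    model = Sequential()
    model.add(Reshape((28, 28), input_shape=(784,)))  # unflatten inside the model
    model.add(LSTM(128, activation='relu'))           # one output vector per image
    model.add(Dropout(0.2))
    model.add(Dense(32, activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(10, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    return model

model_flat = kaggle_LSTM_model_flat()
model_flat.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=10, verbose=2)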

Keras ValueError: This model has never been called

This should be a very simple Keras program. It works until the last line of code. But I have called the predict method. Granted, I used the same input data as the training data, but that should not matter.
from keras.models import Sequential
import pandas as pd
url = 'https://raw.githubusercontent.com/werowe/logisticRegressionBestModel/master/KidCreative.csv'
data = pd.read_csv(url, delimiter=',')
labels=data['Buy']
features = data.iloc[:,2:16]
model = Sequential()
model.compile(optimizer='rmsprop' ,loss='binary_crossentropy',metrics=['accuracy'])
model.fit(data, labels, epochs=10, batch_size=32)
model.evaluate(labels, features, batch_size=128)
model.predict(labels)
model.summary()
But I get this error:
model.summary()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/walker/tf3/lib/python3.4/site-packages/keras/engine/network.py", line 1263, in summary
'This model has never been called, thus its weights '
ValueError: This model has never been called, thus its weights have not yet been created, so no summary can be displayed. Build the model first (e.g. by calling it on some test data).
The first step should be to actually build a model, which you don't do; i.e. after model = Sequential() there should be some model.add statements in order to build the model, before compiling it, fitting it, and using it for evaluation or getting its summary.
Instead of getting guidance from some repo of ambiguous quality, it's better to start from the official examples & tutorials - look for example at the Keras MNIST CNN example; replicating here only the model-related parts, we get:
model = Sequential()
# no model built, as in your case
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])
This should already give an error:
TypeError: Sequential model cannot be built: model is empty. Add some layers first.
which I am surprised you don't report in your case.
Here is what we should do instead (again, refer to the link for the full details):
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])
model.summary()
# works OK
As @desertnaut said, it is necessary to have layers in a model before calling model.summary().
Edit: I want to add that even if your model has layers, model.summary() may not work if the first layer doesn't specify input_shape.
In a Sequential model, the input dimensions of each layer (and thus its number of parameters) are calculated from the output dimensions of the preceding layer, but to start this chain the input dimension of the first layer is needed. So it is not possible to generate a summary of the model without the input dimension.
Note: if you don't explicitly specify the input dimension, Keras will figure it out when you call fit() for the first time. So if you don't specify input_shape, you need to call model.fit() before model.summary().
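A small sketch of that deferred-build behavior (layer sizes and data are arbitrary placeholders):
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(8, activation='relu'))  # no input_shape: dimensions unknown so far
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')
# calling model.summary() here would raise the "never been called" ValueError
model.fit(np.random.rand(10, 4), np.random.rand(10, 1), epochs=1, verbose=0)
model.summary()  # works now: fit() inferred the input dimension from the data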
Thanks Matias Valdenegro, I added some input layers and changed the loss function. Now this works:
import tensorflow as tf
from keras.models import Sequential
import pandas as pd
from keras.layers import Dense

url = 'https://raw.githubusercontent.com/werowe/logisticRegressionBestModel/master/KidCreative.csv'
data = pd.read_csv(url, delimiter=',')
labels = data['Buy']
features = data.iloc[:, 2:16]

model = Sequential()
model.add(Dense(units=64, activation='relu', input_dim=1))
model.add(Dense(units=14, activation='softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer='sgd',
              metrics=['accuracy'])
model.fit(labels, features,
          batch_size=12,
          epochs=10,
          verbose=1,
          validation_data=(labels, features))
model.evaluate(labels, features, verbose=0)
model.summary()

Concatenate Flatten output with another dataset in Keras (Python)

I have 2 datasets. For the first dataset I want to apply convolution and keep the result of the flatten layer, then concatenate it with the other dataset and do a simple feed-forward. Is this possible with Keras?
def build_model(x_train, y_train):
    np.random.seed(7)
    left = Sequential()
    left.add(Conv1D(nb_filter=6, filter_length=3, input_shape=(48,1), activation='relu', kernel_initializer='glorot_uniform'))
    left.add(Conv1D(nb_filter=6, filter_length=3, activation='relu'))
    #model.add(MaxPooling1D())
    print model
    #model.add(Dropout(0.2))
    # flatten layer
    #https://www.quora.com/What-is-the-meaning-of-flattening-step-in-a-convolutional-neural-network
    left.add(Flatten())
    left.add(Reshape((48,1)))
    right = Sequential()
    #model.add(Reshape((48,1)))
    # Compile model
    model.add(Merge([left, right], mode='sum'))
    model.add(Dense(10, 10))
    epochs = 100
    lrate = 0.01
    decay = lrate/epochs
    sgd = SGD(lr=lrate, momentum=0.9, decay=decay, nesterov=False)
    #clipvalue=0.5)
    model.compile(loss='mean_squared_error', optimizer='Adam')
    model.fit(x_train, y_train, nb_epoch=epochs, batch_size=10, verbose=1)
    #model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
    return model
You need to look at the functional API. The sequential model you are using is not designed to take multiple network inputs.
Follow the "Multi-input and multi-output models" example and you will have it working in no time!
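A sketch of what that looks like here (the Conv1D branch follows the question's code; the second input's width of 10, the single-unit regression output, and the array names are assumed placeholders):
from keras.models import Model
from keras.layers import Input, Conv1D, Flatten, Dense, concatenate

# branch 1: convolution over the first dataset, ending in Flatten
in1 = Input(shape=(48, 1))
x = Conv1D(6, 3, activation='relu', kernel_initializer='glorot_uniform')(in1)
x = Conv1D(6, 3, activation='relu')(x)
x = Flatten()(x)

# branch 2: the second dataset, fed straight to the merge point
in2 = Input(shape=(10,))

# concatenate the flattened conv features with the second dataset, then feed forward
merged = concatenate([x, in2])
out = Dense(10, activation='relu')(merged)
out = Dense(1)(out)

model = Model(inputs=[in1, in2], outputs=out)
model.compile(loss='mean_squared_error', optimizer='Adam')
model.fit([x_train_conv, x_train_other], y_train, epochs=100, batch_size=10, verbose=1)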
