Keras - Making two predictions from one neural network - python

I'm trying to combine two outputs produced by the same network, which makes predictions on a 4-class task and a 10-class task. I then combine these outputs into a length-14 array that I use as my end target.
While this appears to work, the predictions always favor a single class: the model produces one probability distribution that selects 1 out of the 14 options instead of 2. What I actually need is 2 predictions, one for each task, all produced by the same model.
from keras.layers import Input, LSTM, Dense, concatenate
from keras.models import Model

input = Input(shape=(100, 100), name='input')
lstm = LSTM(128)(input)
output1 = Dense(4, activation='softmax', name='output1')(lstm)
output2 = Dense(10, activation='softmax', name='output2')(lstm)
output3 = concatenate([output1, output2])
model = Model(inputs=[input], outputs=[output3])
My issue is determining an appropriate loss function and method of prediction. For prediction I can simply grab the output of each Dense layer after the softmax, but I'm unsure how to set up the loss function so that each output gets trained.
Any ideas?
Thanks a lot

You don't need to concatenate the outputs, your model can have two outputs:
input = Input(shape=(100, 100), name='input')
lstm = LSTM(128)(input)
output1 = Dense(4, activation='softmax', name='output1')(lstm)
output2 = Dense(10, activation='softmax', name='output2')(lstm)
model = Model(inputs=[input], outputs=[output1, output2])
Then to train this model, you typically use two losses that are weighted to produce a single loss:
model.compile(optimizer='sgd',
              loss=['categorical_crossentropy', 'categorical_crossentropy'],
              loss_weights=[0.2, 0.8])
Just make sure to format your data correctly, as each input sample now corresponds to two labeled outputs. For more information, check the Functional API Guide.
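A minimal sketch of that data formatting (the shapes and random labels below are assumptions for illustration, not from the question); you pass one label array per named output:
import numpy as np

# Hypothetical data: 32 samples of shape (100, 100), with one 4-class
# label and one 10-class label per sample (one-hot encoded).
x = np.random.rand(32, 100, 100)
y1 = np.eye(4)[np.random.randint(0, 4, size=32)]    # labels for 'output1'
y2 = np.eye(10)[np.random.randint(0, 10, size=32)]  # labels for 'output2'

# Keys match the name= arguments given to the output layers.
model.fit(x, {'output1': y1, 'output2': y2}, epochs=2, batch_size=8)

# Prediction returns one array per output.
pred1, pred2 = model.predict(x)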


what is the difference between Sequential and Model([input],[output]) in TensorFlow?

It seems Sequential and Model([input],[output]) give the same results when I just build a model layer by layer.
However, when I use the following two models with the same input, they give me different results. By the way, the input shape is (None, 15, 2) and the output shape is (None, 1, 2).
Sequential model:
model = tf.keras.Sequential(
    [
        tf.keras.layers.Conv1D(filters=4, kernel_size=7, activation="relu"),
        tf.keras.layers.Conv1D(filters=6, kernel_size=11, activation="relu"),
        tf.keras.layers.LSTM(100, return_sequences=True, activation='relu'),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.LSTM(100, activation='relu'),
        tf.keras.layers.Dense(2, activation='relu'),
        tf.keras.layers.Reshape((1, 2))
    ]
)
Model([input],[output]) model
input_layer = tf.keras.layers.Input(shape=(LOOK_BACK, 2))
conv = tf.keras.layers.Conv1D(filters=4, kernel_size=7, activation='relu')(input_layer)
conv = tf.keras.layers.Conv1D(filters=6, kernel_size=11, activation='relu')(conv)
lstm = tf.keras.layers.LSTM(100, return_sequences=True, activation='relu')(conv)
dropout = tf.keras.layers.Dropout(0.2)(lstm)
lstm = tf.keras.layers.LSTM(100, activation='relu')(dropout)
dense = tf.keras.layers.Dense(2, activation='relu')(lstm)
output_layer = tf.keras.layers.Reshape((1,2))(dense)
model = tf.keras.models.Model([input_layer], [output_layer])
the result of Sequential model:
mse: 21.679258038588586
rmse: 4.65609901511862
mae: 3.963341420395535
And the result of Model([input],[output]) model:
mse: 36.85855652774293
rmse: 6.071124815694612
mae: 4.4878270279889065
The Sequential version uses the Sequential model, while Model([inputs], [outputs]) uses the Functional API.
The first is easier to use, but only works for single-input, single-output feed-forward models (in the sense of Keras layers).
The second is more complex but removes those constraints, allowing you to build many more kinds of models.
So your main point is right: any Sequential model can be rewritten as a functional model. You can double-check this by comparing the architectures, using the summary function and by plotting the models.
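As a quick sketch of that check (model_sequential and model_functional are stand-in names for the two models above, and both need to be built, e.g. by calling them on data, before summary works):
from tensorflow.keras.utils import plot_model

# Print the layer-by-layer architecture of each model.
model_sequential.summary()
model_functional.summary()

# Render both graphs to images for a side-by-side comparison
# (requires pydot and graphviz to be installed).
plot_model(model_sequential, to_file='sequential.png', show_shapes=True)
plot_model(model_functional, to_file='functional.png', show_shapes=True)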
However, this only shows that the architectures are the same, not that the weights are!
Assuming you are fitting both models with the same data and the same compile and fit parameters (by the way, include those in your question), there is a lot of randomness in the training process which can lead to different results. So, try the following to compare them better:
remove as much randomness as possible by setting seeds, in your code and for each layer instantiation (a short sketch follows this list);
avoid data augmentation if you are using it;
use the same validation/train split for both models: to be sure, you can split the dataset yourself;
do not use shuffling in data generators nor during training.
Here you can read more about producing reproducible results in Keras.
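A minimal sketch of that seeding step (assuming TensorFlow 2.x; the seed values themselves are arbitrary):
import os
import random

import numpy as np
import tensorflow as tf

# Fix every source of randomness Keras relies on.
os.environ['PYTHONHASHSEED'] = '0'
random.seed(42)
np.random.seed(42)
tf.random.set_seed(42)

# Seeds can also be passed to individual layer initializers, e.g.:
# tf.keras.layers.Dense(2, kernel_initializer=tf.keras.initializers.GlorotUniform(seed=42))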
Even after following these tips, your results may not be deterministic, and hence not identical, so finally, and perhaps most importantly: do not compare a single run. Train and evaluate each model several times (for instance, 20 times) and then compare the average MAE along with its standard deviation.
If after all this your results are still very different, please update your question with them.

Predicting Multiple Outputs one after another in Tensorflow

I want to create a model that can predict two outputs. I did some research and found that there's a way to do it by creating two branches (one per output) with the functional API in TensorFlow Keras, but I have another approach in mind, which looks like this:
i.e. given an input, first I want to predict output1 and then, based on that, predict output2.
So how can this be done in TensorFlow?
Please let me know how the training will be done as well, i.e. how I'll be able to pass labels for output1 and output2 and then calculate the loss.
Thank you
You can do it with the functional API of TensorFlow. I'll write it as pseudo-code:
Inputs = your_input
x = hidden_layers()(Inputs)
Output1 = Dense(..., name='Output1')(x)
x = hidden_layers()(Output1)
Output2 = Dense(..., name='Output2')(x)
So you can separate it to two models if it is what you desired:
model1 = tf.keras.models.Model(inputs=[Inputs], outputs=[Output1])
model2 = tf.keras.models.Model(inputs=[Inputs], outputs=[Output2])
Or have everything in one model:
model = tf.keras.models.Model(inputs=[Inputs], outputs=[Output2])
Output1_pred = model.get_layer('Output1').output
UPDATE:
In order to train a model with two outputs, you can separate the model into two parts and train each part separately as follows:
model1 = tf.keras.models.Model(inputs=[Inputs], outputs=[Output1])
model2 = tf.keras.models.Model(inputs=[model1.get_layer('Output1').output], outputs=[Output2])

model1.compile(...)
model1.fit(...)

# freeze the first part before training the second
for layer in model1.layers:
    layer.trainable = False

model2.compile(...)
model2.fit(...)
You can actually modify the great answer by @Mohammad to compose a single model with two outputs.
Inputs = your_input
x = hidden_layers()(Inputs)
Output1 = Dense()(x)
x = hidden_layers()(Output1)
Output2 = Dense()(x)
model = tf.keras.models.Model(inputs=[Inputs], outputs=[Output1, Output2])
model.compile(loss=[loss_1, loss_2], loss_weights=[0.5, 0.5], optimizer=sgd, metrics=['accuracy'])
Of course, you can change the weights, optimiser and metrics according to your case.
Then the model has to be trained on data like (X, y1, y2) where (y1, y2) are output1 and output2 labels respectively.
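A minimal sketch of that training call (X, y1 and y2 are placeholder names, not from the answer above):
# X is the input array; y1 and y2 are the label arrays for Output1 and
# Output2 respectively, aligned sample by sample.
model.fit(X, [y1, y2], epochs=10, batch_size=32)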

Multi-Multi-Class Classification in Tensorflow/Keras

I already posted this question on CrossValidated, but thought the StackOverflow community, being bigger, might be able to answer this question faster.
I'd like to build a model that can output results for several multi-class classification problems at once. Suppose you have diagnostic data about a product that needs to be repaired and you want to predict the quantity of various part numbers that will be needed to repair the product. The input data is the same for all part numbers to be predicted.
Here's a concrete example. You have 2 part numbers that can get replaced, part A and part B. For part A you can replace 0, 1, 2, or 3 of them on the product. For part B you can replace 0, 2 or 4 (they are replaced in pairs). How can a TensorFlow/Keras neural network be configured so that the probabilities of replacing part A 0, 1, 2, or 3 times sum to 1, with the same behavior for part B (its probabilities also sum to 1)?
Simple code like the code below would treat all of the values as coming from the same discrete probability distribution. How can this be modified to create 2 discrete probability distributions in the output:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

def baseline_model():
    # create model
    model = Sequential()
    model.add(Dense(8, input_dim=4, activation='relu'))
    model.add(Dense(7, activation='softmax'))
    # Compile model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
UPDATE
Based on the comment(s), will something like this work?
References this question
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Flatten, Concatenate
from mypackage import get_my_data, compiler_args
data = get_my_data() # obviously, this is a stand-in for however you get your data.
input_layer = Input(data.shape[1:])
hidden = Flatten()(input_layer)
hidden = Dense(192, activation='relu')(hidden)
main_output = Dense(192, activation='relu')(hidden)
# I'm going to build each individual parallel set of layers separately
part_a = Dense(10, activation='relu')(main_output)
output_a = Dense(4, activation='softmax')(part_a) # multi-class classification for part A
part_b = Dense(10, activation='relu')(main_output) # note that it is main_output again
output_b = Dense(3, activation='softmax')(part_b) # multi-class classification for part B
final_output = Concatenate()([output_a, output_b]) # Combine the outputs into final output layer
model = tf.keras.Model(input_layer, final_output)
model.compile(**compiler_args)
model.summary()
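That layout should give two separate softmax distributions whose probabilities each sum to 1. As a variant (my sketch, not from the original post), you can skip the Concatenate and expose the two heads as separate outputs, which lets each one get its own categorical cross-entropy loss, as in the first answer on this page:
model = tf.keras.Model(input_layer, [output_a, output_b])
model.compile(optimizer='adam',
              loss=['categorical_crossentropy', 'categorical_crossentropy'],
              metrics=['accuracy'])
# Training then takes one label array per head:
# model.fit(data, [y_a, y_b], ...)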

Which Keras output layer / loss function to use in the following scenario

I got the following data sample:
[1,2,1,4,5],[1,2,1,4,5],[0,2,7,0,1] with a label of [1,0,1]
....
[1,9,1,4,5],[1,5,1,4,5],[0,7,7,0,1] with a label of [0,1,1]
I can't train it on a single series such as [1,2,1,4,5] with a label of 1 or 0, as the whole row carries meaningful context information, so all 15 input digits should be inferred together.
It's not your typical classification, and it doesn't seem like a regression problem either. Also, the data is not related to imagery; it's taken from a scientific domain.
Obviously I am feeding the data as a flat 15-node input to the net:
model = Sequential(
    [
        Dense(units=16, input_shape=scaled_train_samples[0].shape, activation='relu'),
        Dense(units=32, activation='relu'),
        Dense(units=3, activation='???'),
    ])
Which output activation function would be ideal in such a case?
I would recommend having 3 outputs from the network. Since the data affects all 3 "sub-labels", the network only branches apart at the classification layer. If you want, you can add more layers to each specific branch.
I'm assuming that each "sub-label" is a binary classification, which is why I chose sigmoid (it returns a value from 0 to 1, so a larger number means the network favors class 1 over class 0).
To do this, you would have to change to the Functional API like this:
from keras.layers import Input, Dense
from keras.models import Model
from keras.optimizers import Adam

visible = Input(shape=scaled_train_samples[0].shape)
x = Dense(16, activation='relu')(visible)
x = Dense(32, activation='relu')(x)
x = Dense(16, activation='relu')(x)

out1 = Dense(units=1, activation='sigmoid', name='OUT1')(x)
out2 = Dense(units=1, activation='sigmoid', name='OUT2')(x)
out3 = Dense(units=1, activation='sigmoid', name='OUT3')(x)

finalModel = Model(inputs=visible, outputs=[out1, out2, out3])

optimizer = Adam(learning_rate=.0001)
losses = {
    'OUT1': 'binary_crossentropy',
    'OUT2': 'binary_crossentropy',
    'OUT3': 'binary_crossentropy',
}
finalModel.compile(optimizer=optimizer, loss=losses,
                   metrics={'OUT1': 'accuracy', 'OUT2': 'accuracy', 'OUT3': 'accuracy'})
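A minimal sketch of the matching training call (y1, y2 and y3 are assumptions here: each column of the [1,0,1]-style labels becomes its own array):
# y1, y2, y3 are 1-D arrays of 0/1 labels, one per output head,
# aligned with the rows of scaled_train_samples.
finalModel.fit(scaled_train_samples, {'OUT1': y1, 'OUT2': y2, 'OUT3': y3},
               epochs=20, batch_size=16)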

Training on sequences of sentences using Keras

I am working on a project where I have to use a combination of numeric and text data in a neural network to make predictions of a system's availability for the next hour. Instead of trying to use separate neural networks and doing something weird/unclear (to me) at the end to produce the desired output, I decided to use Keras' merge layer with two networks (one for numeric data, one for text). The idea is that I feed the model a sequence of performance metrics for the previous 6 hours in the shape (batch_size, 6hrs, num_features). Alongside the input to the network that handles numeric data, I give the second network another sequence of size (batch_size, max_alerts_per_sequence, max_sentence_length).
Any sequence of numeric data within a time range can have a variable number of events (text data) associated with it. For the sake of simplicity, I only allow a maximum of 50 events to accompany a sequence of performance data. Each event is hash encoded by word and padded. I have tried using a flatten layer to reduce the input shape from (50, 30) to (1500) so that the model can train on every event in these "sequences" (to clarify: I pass the model 50 sentences with 30 encoded elements each for every sequence of performance data).
My question is: Due to the fact that I need the NN to look at all events for a given sequence of performance metrics, how can I make the NN for text based data train on sequences of sentences?
My Model:
#LSTM Module for performance metrics
input = Input(shape=(shape[1], shape[2]))
lstm1 = Bidirectional(LSTM(units=lstm_layer_count, activation='tanh', return_sequences=True, input_shape=shape))(input)
dropout1 = Dropout(rate=0.2)(lstm1)
lstm2 = Bidirectional(LSTM(units=lstm_layer_count, activation='tanh', return_sequences=False))(dropout1)
dropout2 = Dropout(rate=0.2)(lstm2)
#LSTM Module for text based data
tInput = Input(shape=(50, 30))
flatten = Flatten()(tInput)
embed = Embedding(input_dim=vocabsize + 1, output_dim= 50 * 30, input_length=30*50)(flatten)
magic = Bidirectional(LSTM(100))(embed)
tOut = Dense(1, activation='relu')(magic)
#Merge the layers
concat = Concatenate()([dropout2, tOut])
output = Dense(units=1, activation='sigmoid')(concat)
nn = keras.models.Model(inputs=[input, tInput], outputs = output)
opt = keras.optimizers.SGD(lr=0.1, momentum=0.8, nesterov=True, decay=0.001)
nn.compile(optimizer=opt, loss='mse', metrics=['accuracy', coeff_determination])
So, as far as I understood, you have a sequence of at most 50 events that you want to make predictions for. These events have text data attached, which can be treated as another sequence of word embeddings. Here is an article about a similar architecture.
I would propose a solution that involves LSTMs for the text part and a 1D convolution for the "real" sequence part. Every LSTM output is concatenated with the numerical data. This involves 50 LSTM applications, which can be time consuming to train even if you use shared weights. It would also be possible to use only convolution layers for the text part, which is faster but does not model long-term dependencies. (In my experience, these long-term dependencies are often not that important in text mining.)
Text -> LSTM or 1DConv -> concat with numerical data -> 1DConv -> Output
Here is some example code which shows how to use shared weights:
numeric_input = Input(shape=(x_numeric_train.values.shape[1],), name='numeric_input')
nlp_seq = Input(shape=(number_of_messages, seq_length,), name='nlp_input')

# shared layers: the same embedding and LSTM are applied to every message
emb = TimeDistributed(Embedding(input_dim=num_features, output_dim=embedding_size,
                                input_length=seq_length, mask_zero=True,
                                input_shape=(seq_length,)))(nlp_seq)
x = TimeDistributed(Bidirectional(LSTM(32, dropout=0.3, recurrent_dropout=0.3,
                                       kernel_regularizer=regularizers.l2(0.01))))(emb)

c1 = Conv1D(filter_size, kernel1, padding='valid', activation='relu', strides=1, kernel_regularizer=regularizers.l2(kernel_reg))(x)
p1 = GlobalMaxPooling1D()(c1)
c2 = Conv1D(filter_size, kernel2, padding='valid', activation='relu', strides=1, kernel_regularizer=regularizers.l2(kernel_reg))(x)
p2 = GlobalMaxPooling1D()(c2)
c3 = Conv1D(filter_size, kernel3, padding='valid', activation='relu', strides=1, kernel_regularizer=regularizers.l2(kernel_reg))(x)
p3 = GlobalMaxPooling1D()(c3)

x = concatenate([p1, p2, p3, numeric_input])
x = Dense(1, activation='sigmoid')(x)

model = Model(inputs=[nlp_seq, numeric_input], outputs=[x])
model.compile('adam', 'binary_crossentropy', metrics=['accuracy'])
And training:
model.fit([x_train, x_numeric_train], y_train)
# where x_train is an array of shape (num_samples, num_messages, seq_length)
A complex model like this needs a lot of data to converge. With less data, a simpler solution can be implemented by aggregating the events so that there is only one sequence. For example, the text data of all events can be treated as one single text (with a separator token) instead of multiple texts, while the numerical data can be summed up, averaged or even combined into a fixed-length list. But this depends on your data.
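A rough sketch of that aggregation (the helper name and separator token are assumptions, not from the answer):
import numpy as np

SEP = ' <sep> '  # hypothetical separator token between event texts

def aggregate_window(event_texts, event_numeric):
    # Collapse one window's events into a single text and a fixed-length numeric vector.
    joined_text = SEP.join(event_texts)               # all event texts as one string
    numeric_summary = np.mean(event_numeric, axis=0)  # or np.sum, min/max, etc.
    return joined_text, numeric_summary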
As I am working on something similar, I will update this answer with code later on.
