Which Keras output layer / loss function to use in the following scenario - python

I got the following data sample:
[1,2,1,4,5],[1,2,1,4,5],[0,2,7,0,1] with a label of [1,0,1]
....
[1,9,1,4,5],[1,5,1,4,5],[0,7,7,0,1] with a label of [0,1,1]
I can't train it on a single series of [1,2,1,4,5] with a label of 1 or 0, because the whole row carries meaningful context, so all 15 input values should be inferred together.
It's not a typical classification problem, and it doesn't seem to be a regression problem either. Also, the data is not imagery; it comes from a scientific domain.
Obviously I am feeding the data as a flat vector of 15 input nodes to the net:
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    Dense(units=16, input_shape=scaled_train_samples[0].shape, activation='relu'),
    Dense(units=32, activation='relu'),
    Dense(units=3, activation='???'),  # which activation to use here?
])
Which output activation function would be ideal in such a case?

I would recommend giving the network 3 separate outputs. Since all of the input data can affect each of the 3 "sub-labels", the network only branches apart at the classification layer. If you want, you can add more layers to each specific branch.
I'm assuming that each "sub-label" is a binary classification, which is why I chose sigmoid (it returns a value between 0 and 1, so a larger number means the network favors class 1 over class 0).
To do this, you would have to change to the Functional API like this:
from keras.layers import Input, Dense
from keras.models import Model
from keras.optimizers import Adam

visible = Input(shape=scaled_train_samples[0].shape)
model = Dense(16, activation='relu')(visible)
model = Dense(32, activation='relu')(model)
model = Dense(16, activation='relu')(model)
out1 = Dense(units=1, activation='sigmoid', name='OUT1')(model)
out2 = Dense(units=1, activation='sigmoid', name='OUT2')(model)
out3 = Dense(units=1, activation='sigmoid', name='OUT3')(model)
finalModel = Model(inputs=visible, outputs=[out1, out2, out3])

optimizer = Adam(learning_rate=.0001)
losses = {
    'OUT1': 'binary_crossentropy',
    'OUT2': 'binary_crossentropy',
    'OUT3': 'binary_crossentropy',
}
finalModel.compile(optimizer=optimizer, loss=losses,
                   metrics={'OUT1': 'accuracy', 'OUT2': 'accuracy', 'OUT3': 'accuracy'})
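To train this model you pass the three label columns separately, keyed by output name. A minimal sketch, assuming the labels arrive as an (N, 3) NumPy array called train_labels (that name and the example values are placeholders, not from the question):

import numpy as np

# hypothetical label array of shape (N, 3): one column per sub-label
train_labels = np.array([[1, 0, 1],
                         [0, 1, 1]])

finalModel.fit(
    scaled_train_samples,
    {'OUT1': train_labels[:, 0],
     'OUT2': train_labels[:, 1],
     'OUT3': train_labels[:, 2]},
    epochs=30,
    batch_size=10,
)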

Related

How to train a multi-output regression net with tf.data.Dataset?

Only the first output parameter is learned properly during training of a multi-output regression net. The second and subsequent parameters just seem to follow the first one. It seems that the ground truth for the second output parameter is not used during training. How do I shape the tf.data.Dataset and pass it into the model.fit() function so that the second output parameter is trained as well?
import tensorflow as tf
import pandas as pd
from tensorflow import keras
from tensorflow.keras import layers

# create dataset from csv
file = pd.read_csv('minimalDataset.csv', skipinitialspace=True)
input = file["input"].values
output1 = file["output1"].values
output2 = file["output2"].values
dataset = tf.data.Dataset.from_tensor_slices((input, (output1, output2))).batch(4)

# create multi output regression net
input_layer = keras.Input(shape=(1,))
x = layers.Dense(20, activation="relu")(input_layer)
x = layers.Dense(60, activation="relu")(x)
output_layer = layers.Dense(2)(x)
model = keras.Model(input_layer, output_layer)
model.compile(optimizer="adam", loss="mean_squared_error")

# train model and make prediction (deliberately overfitting to illustrate problem)
model.fit(dataset, epochs=500)
prediction = model.predict(dataset)
minimalDataset.csv and predictions:

input  output1  output2  prediction_output1  prediction_output2
0      -1       1        -0.989956           -0.989964
1      2        0        1.834444            1.845085
2      0        2        0.640249            0.596099
3      1        -1       0.621426            0.646796
If I create two independent dense final layers, the second parameter is learned accurately, but I get two losses:
output_layer = (layers.Dense(1)(x), layers.Dense(1)(x))
Note: I want to use tf.data.Dataset because I build a 20k-sample image/csv dataset with it and do per-element transformations as preprocessing.
tf.data.Dataset.from_tensor_slices() slices along the first dimension, so the labels have to arrive as a single tensor of shape (num_samples, 2) to match the 2-unit output layer, rather than as a tuple of two separate vectors. Stacking and transposing the output tensors achieves this:
dataset = tf.data.Dataset.from_tensor_slices((tf.transpose(input), tf.transpose([output1, output2]))).batch(4)
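As a self-contained sketch of that fix, with the four rows from minimalDataset.csv inlined as NumPy arrays instead of read from the csv:

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# the four rows from minimalDataset.csv
input = np.array([[0.], [1.], [2.], [3.]])
output1 = np.array([-1., 2., 0., 1.])
output2 = np.array([1., 0., 2., -1.])

# labels stacked into shape (4, 2) so each example carries both targets
labels = np.stack([output1, output2], axis=1)
dataset = tf.data.Dataset.from_tensor_slices((input, labels)).batch(4)

input_layer = keras.Input(shape=(1,))
x = layers.Dense(20, activation="relu")(input_layer)
x = layers.Dense(60, activation="relu")(x)
output_layer = layers.Dense(2)(x)
model = keras.Model(input_layer, output_layer)
model.compile(optimizer="adam", loss="mean_squared_error")
model.fit(dataset, epochs=500)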

Multi-Multi-Class Classification in Tensorflow/Keras

I already posted this question on CrossValidated, but thought the StackOverflow community, being bigger, might be able to answer this question faster.
I'd like to build a model that can output results for several multi-class classification problems at once. Suppose you have diagnostic data about a product that needs to be repaired and you want to predict the quantity of various part numbers that will be needed to repair the product. The input data is the same for all part numbers to be predicted.
Here's a concrete example. You have 2 part numbers that can get replaced, part A and part B. For part A you can replace 0, 1, 2, or 3 of them on the product. For part B you can replace 0, 2, or 4 (they are replaced in pairs). How can a Tensorflow/Keras neural network be configured so that the probabilities of replacing part A 0, 1, 2, or 3 times sum to 1, with similar behavior for part B (its probabilities also sum to 1)?
Simple code like the code below would treat all of the values as coming from the same discrete probability distribution. How can this be modified to create 2 discrete probability distributions in the output:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

def baseline_model():
    # create model
    model = Sequential()
    model.add(Dense(8, input_dim=4, activation='relu'))
    model.add(Dense(7, activation='softmax'))
    # compile model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
UPDATE
Based on the comment(s), will something like this work?
References this question
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Flatten, Concatenate
from mypackage import get_my_data, compiler_args
data = get_my_data() # obviously, this is a stand-in for however you get your data.
input_layer = Input(data.shape[1:])
hidden = Flatten()(input_layer)
hidden = Dense(192, activation='relu')(hidden)
main_output = Dense(192, activation='relu')(hidden)
# I'm going to build each individual parallel set of layers separately
part_a = Dense(10, activation='relu')(main_output)
output_a = Dense(4, activation='softmax')(part_a) # multi-class classification for part A
part_b = Dense(10, activation='relu')(main_output) # note that it is main_output again
output_b = Dense(3, activation='softmax')(part_b) # multi-class classification for part B
final_output = Concatenate()([output_a, output_b]) # Combine the outputs into final output layer
model = tf.keras.Model(input_layer, final_output)
model.compile(**compiler_args)
model.summary()
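As an aside (not from the original thread): if the two softmax heads are kept as separate model outputs instead of being concatenated, Keras can apply a categorical cross-entropy loss to each head independently, the same pattern used in the two-output LSTM answer further down. A sketch, assuming data holds the input features and y_a / y_b are hypothetical one-hot label arrays of shapes (N, 4) and (N, 3):

# alternative sketch: two separate outputs, one loss per softmax head
# (y_a and y_b are hypothetical one-hot label arrays, not from the question)
model = tf.keras.Model(input_layer, [output_a, output_b])
model.compile(optimizer='adam',
              loss=['categorical_crossentropy', 'categorical_crossentropy'],
              metrics=['accuracy'])
model.fit(data, [y_a, y_b], epochs=10)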

Bad results in Autoencoder when using non-linear activation functions in combination with softmax

I am training an autoencoder on data where each observation is a vector p = [p_1, p_2, ..., p_n] with 0 < p_i < 1 for all i. Furthermore, each input p can be partitioned into parts whose elements each sum to 1. This is because the elements represent parameters of categorical distributions, and p contains the parameters of multiple categorical distributions.
As an example, the data that I have comes from a probabilistic database that may look like this:
In order to enforce this constraint on my output, I use multiple softmax activations in the functional model API from Keras. In fact, what I am doing is similar to multi-label classification. This may look as follows:
The implementation is as follows:
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model
from keras import optimizers

encoding_dim = 3
numb_cat = len(structure)

inputs = Input(shape=(train_corr.shape[1],))
encoded = Dense(6, activation='linear')(inputs)
encoded = Dense(encoding_dim, activation='linear')(encoded)
decoded = Dense(6, activation='linear')(encoded)
decodes = [Dense(e, activation='softmax')(decoded) for e in structure]

losses = [jsd for j in range(numb_cat)]  # JSD loss function
autoencoder = Model(inputs, decodes)
sgd = optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
autoencoder.compile(optimizer=sgd, loss=losses, loss_weights=[1 for k in range(numb_cat)])

train_attr_corr = [train_corr[:, i:j] for i, j in zip(np.cumsum(structure_0[:-1]), np.cumsum(structure_0[1:]))]
test_attr_corr = [test_corr[:, i:j] for i, j in zip(np.cumsum(structure_0[:-1]), np.cumsum(structure_0[1:]))]

history = autoencoder.fit(train_corr, train_attr_corr, epochs=100, batch_size=2,
                          shuffle=True, verbose=1,
                          validation_data=(test_corr, test_attr_corr))
where structure is a list containing the number of categories that each attribute has, thus governing which nodes in the output layer are grouped and go to the same softmax layer. In the example above, structure = [2,2]. Furthermore, the loss jsd is a symmetric version of the KL-divergence.
Question:
When using linear activation functions, the results are pretty good. However, when I try to use non-linear activation functions (relu or sigmoid), the results are much worse. What might be the reason for this?

keras model.evaluate() does not show loss

I've created a neural network of the following form in keras:
from keras.layers import Dense, Activation, Input
from keras import Model

input_dim_v = 3
output_dim_v = 1  # assumed output size
hidden_dims = [100, 100, 100]

inputs = Input(shape=(input_dim_v,))
net = inputs
for h_dim in hidden_dims:
    net = Dense(h_dim)(net)
    net = Activation("elu")(net)
outputs = Dense(output_dim_v)(net)

model_v = Model(inputs=inputs, outputs=outputs)
model_v.compile(optimizer='adam', loss='mean_squared_error', metrics=['mse'])
Later, I train it on single examples using model_v.train_on_batch(X[i],y[i]).
To test whether the neural network is becoming a better function approximator, I wanted to periodically evaluate the model on the accumulated X and y (in my case, X and y grow over time). However, when I call model_v.evaluate(X, y), only the characteristic progress bar appears in the console; neither the loss value nor the mse metric (which are the same in this case) is printed.
How can I change that?
The loss and metric values are not shown in the progress bar of the evaluate() method. Instead, they are returned as the output of the evaluate() method, so you can print them yourself:
for i in range(n_iter):
    # ... get the i-th batch or sample
    # ... train the model using the `train_on_batch` method

    # evaluate the model on the whole or part of the test data
    loss_metric = model.evaluate(test_data, test_labels)
    print(loss_metric)
According to the documentation, if your model has multiple outputs and/or metrics, you can use the model.metrics_names attribute to find out what the values in loss_metric correspond to.
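For example, pairing the returned values with their names (a small sketch, assuming a model compiled with one loss and an 'mse' metric, as in the question):

results = model_v.evaluate(X, y, verbose=0)
# model_v.metrics_names is something like ['loss', 'mse']
for name, value in zip(model_v.metrics_names, results):
    print(f"{name}: {value:.4f}")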

Keras - Making two predictions from one neural network

I'm trying to combine two outputs that are produced by the same network, which makes predictions on a 4-class task and a 10-class task. I then combine these outputs into a length-14 array which I use as my end target.
While this seems to work, the predictions are always for a single class, so it produces a probability distribution that only selects 1 out of the 14 options instead of 2. What I actually need is 2 predictions, one for each task, all produced by the same model.
from keras.layers import Input, LSTM, Dense, concatenate
from keras.models import Model

input = Input(shape=(100, 100), name='input')
lstm = LSTM(128, input_shape=(100, 100))(input)
output1 = Dense(4, activation='softmax', name='output1')(lstm)
output2 = Dense(10, activation='softmax', name='output2')(lstm)
output3 = concatenate([output1, output2])
model = Model(inputs=[input], outputs=[output3])
My issue here is determining an appropriate loss function and method of prediction. For prediction I can simply grab the output of each softmax layer, but I'm unsure how to set the loss function so that each of these outputs is trained.
Any ideas?
Thanks a lot
You don't need to concatenate the outputs; your model can have two outputs:
input = Input(shape=(100, 100), name='input')
lstm = LSTM(128, input_shape=(100, 100))(input)
output1 = Dense(4, activation='softmax', name='output1')(lstm)
output2 = Dense(10, activation='softmax', name='output2')(lstm)
model = Model(inputs=[input], outputs=[output1, output2])
Then to train this model, you typically use two losses that are weighted to produce a single loss:
model.compile(optimizer='sgd',
              loss=['categorical_crossentropy', 'categorical_crossentropy'],
              loss_weights=[0.2, 0.8])
Just make sure to format your data right, as now each input sample corresponds to two output labeled samples. For more information check the Functional API Guide.
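A hedged fit sketch for this two-output setup, assuming X has shape (num_samples, 100, 100) and y4 / y10 are one-hot label arrays of shapes (num_samples, 4) and (num_samples, 10) (those names are placeholders, not from the question):

# y4 and y10 are hypothetical one-hot label arrays for the 4-class and 10-class tasks
model.fit(X, [y4, y10], epochs=20, batch_size=32)

# predict() returns one array per output: probabilities over 4 classes and over 10 classes
pred4, pred10 = model.predict(X)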
