Use both losses on a subnetwork of combined networks - python

I am trying to stack two networks together and calculate the loss of each network separately. For example, in the image below, the loss of LSTM1 should be (Loss1 + Loss2) and the loss of the whole system should be just (Loss2).
I implemented a network like the one below with this idea, but I have no idea how to compile and run it.
def build_lstm1():
    x = Input(shape=(self.timesteps, self.input_dim,), name='input')
    h = LSTM(1024, return_sequences=True)(x)
    scores = TimeDistributed(Dense(self.input_dim, activation='sigmoid', name='dense'))(h)
    LSTM1 = Model(x, scores)
    return LSTM1

def build_lstm2():
    x = Input(shape=(self.timesteps, self.input_dim,), name='input')
    h = LSTM(1024, return_sequences=True)(x)
    labels = TimeDistributed(Dense(self.input_dim, activation='sigmoid', name='dense'))(h)
    LSTM2 = Model(x, labels)
    return LSTM2
lstm1 = build_lstm1()
lstm2 = build_lstm2()

combined = Model(inputs=lstm1.input,
                 outputs=[lstm1.output,
                          lstm2(lstm1.output).output])

This is not the right way to use the Keras Model functional API. Also, it is not possible for the loss of LSTM1 to be Loss1 + Loss2; it will only be Loss1. Similarly, for LSTM2 it will only be Loss2. However, for the combined network you can have any linear combination of Loss1 and Loss2 as the overall loss, i.e.
Loss_overall = a*Loss1 + b*Loss2, where a and b are non-negative real numbers.
The real strength of the Model functional API is that it allows you to create deep learning architectures with multiple inputs and multiple outputs in a single model.
def build_lstm_combined():
    x = Input(shape=(self.timesteps, self.input_dim,), name='input')
    h_1 = LSTM(1024, return_sequences=True)(x)
    scores = TimeDistributed(Dense(self.input_dim, activation='sigmoid', name='dense_scores'))(h_1)
    h_2 = LSTM(1024, return_sequences=True)(h_1)
    labels = TimeDistributed(Dense(self.input_dim, activation='sigmoid', name='dense_labels'))(h_2)
    LSTM_combined = Model(x, [scores, labels])
    return LSTM_combined
This combined model has a loss that is a combination of Loss1 and Loss2. When compiling the model you can specify the weight of each loss to obtain the overall loss. If your desired loss is 0.5*Loss1 + Loss2, you can do this with:
model_1 = build_lstm_combined()
model_1.compile(optimizer=Adam(0.001),
                loss=['categorical_crossentropy', 'categorical_crossentropy'],
                loss_weights=[0.5, 1])
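Since the combined model has two outputs, fit then expects two target arrays, one per output. A minimal sketch, assuming hypothetical arrays X_train, scores_true and labels_true shaped to match the input and the two outputs:

# targets are passed in the same order as the model's outputs: [scores, labels]
model_1.fit(X_train, [scores_true, labels_true], batch_size=32, epochs=10)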

Related

tensorflow use input in loss function

I am using TensorFlow/Keras and I would like to use the input in the loss function,
as per this answer here:
Custom loss function in Keras based on the input data
I have created my loss function as follows:
def custom_Loss_with_input(inp_1):
    def loss(y_true, y_pred):
        b = K.mean(inp_1)
        return y_true - b
    return loss
and set up the model, with the layers and everything else, ending like this:
model = Model(inp_1, x)
model.compile(loss=custom_Loss_with_input(inp_1), optimizer= Ada)
return model
Nevertheless, I get the following error:
TypeError: Cannot convert a symbolic Keras input/output to a numpy array. This error may indicate that you're trying to pass a symbolic value to a NumPy call, which is not supported. Or, you may be trying to pass Keras symbolic inputs/outputs to a TF API that does not register dispatching, preventing Keras from automatically converting the API call to a lambda layer in the Functional Model.
Any advice on how to eliminate this error?
Thanks in advance
You can use add_loss to pass additional tensors to your loss, in your case the input tensor.
Here is an example:
import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from tensorflow.keras import backend as K

def CustomLoss(y_true, y_pred, input_tensor):
    b = K.mean(input_tensor)
    return K.mean(K.square(y_true - y_pred)) + b

X = np.random.uniform(0, 1, (1000, 10))
y = np.random.uniform(0, 1, (1000, 1))

inp = Input(shape=(10,))
hidden = Dense(32, activation='relu')(inp)
out = Dense(1)(hidden)
target = Input((1,))

model = Model([inp, target], out)
model.add_loss(CustomLoss(target, out, inp))
model.compile(loss=None, optimizer='adam')
model.fit(x=[X, y], y=None, epochs=3)
If your loss is composed of different parts and you want to track them separately, you can add different losses corresponding to the loss parts. This way, the losses are printed at the end of each epoch and stored in model.history.history. Remember that the final loss minimized during training is the sum of the various loss parts.
def ALoss(y_true, y_pred):
    return K.mean(K.square(y_true - y_pred))

def BLoss(input_tensor):
    b = K.mean(input_tensor)
    return b

X = np.random.uniform(0, 1, (1000, 10))
y = np.random.uniform(0, 1, (1000, 1))

inp = Input(shape=(10,))
hidden = Dense(32, activation='relu')(inp)
out = Dense(1)(hidden)
target = Input((1,))

model = Model([inp, target], out)
model.add_loss(ALoss(target, out))
model.add_metric(ALoss(target, out), name='a_loss')
model.add_loss(BLoss(inp))
model.add_metric(BLoss(inp), name='b_loss')
model.compile(loss=None, optimizer='adam')
model.fit(x=[X, y], y=None, epochs=3)
To use the model in inference mode (removing the target from inputs):
final_model = Model(model.input[0], model.output)
final_model.predict(X)
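As a small follow-up, the per-part values added with add_metric can be read back after training from the History object returned by fit; a minimal sketch, continuing the second example above:

history = model.fit(x=[X, y], y=None, epochs=3)
print(history.history['loss'])    # total loss per epoch (sum of the added loss parts)
print(history.history['a_loss'])  # value tracked via add_metric(..., name='a_loss')
print(history.history['b_loss'])  # value tracked via add_metric(..., name='b_loss')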

Backpropagation in Keras model does not affect all layers?

I have the following model:
inputs = Input(shape=(8,))  # 8 values as input
x = Reshape((8, 1))(inputs)

# generator
x = Bidirectional(LSTM(32, return_sequences=True))(x)
x = Bidirectional(LSTM(64, return_sequences=True))(x)
generated = LSTM(4, return_sequences=True, activation="sigmoid")(x)

# rating
x = Bidirectional(LSTM(128, return_sequences=True))(generated)
x = Bidirectional(LSTM(128))(x)
x = Flatten()(x)
x = Dense(16, activation="relu")(x)
rating = Dense(8, activation="relu")(x)

model = Model(inputs=inputs, outputs=[rating, generated])
return model
I feed a sequence (8,) to the model; a generator should create a new sequence (8, 4) out of this sequence that fulfills some conditions. There are a lot of outputs that could fulfill that condition, but my generator should just pick one that it likes.
Then I feed the generated sequence to the next layers, where I calculate the value (8,) of this output in order to have gradients when I apply my loss function:
rating, generated = model(model_input, training=True)
calculated_rating = my_numeric_function(np.array(generated))
loss_1 = mse(model_input, rating)  # the rating loss
loss_2 = mse(calculated_rating, rating)  # the loss between rating and generator
metric = mse(model_input, calculated_rating)  # metric: difference between model input and the real calculated value
tape.gradient(loss_1 + loss_2, model.trainable_weights)
My loss_1 and loss_2 are decreasing, but the metric (calculated_rating - generated) stays exactly the same.
It seems that during backpropagation my loss function does not change the weights of the generator layers.
What may be the reason that the metric does not decrease? (It stays around 51-52.)

Unable to completely separate outputs of model in TensorFlow

I am trying to create a convolutional neural network that has two regression outputs, a score and a confidence. I have frozen the layers they have in common in the hopes that the addition of the confidence output doesn't change the score, but in my experiments it has. For the model with just the score, I used Xception and added a simple GlobalAveragePooling2D and Dense(512) layer then output a single number.
base_model = Xception(input_shape=(224, 224, 3), weights='imagenet', include_top=False)
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(512, activation='relu')(x)
predictions = Dense(1, activation='sigmoid')(x)
model = Model(inputs=base_model.input, outputs=predictions)

for layer in base_model.layers:
    layer.trainable = False

optimizer = Adam(learning_rate=learning_rate)
model.compile(loss='mae', optimizer=optimizer, metrics=['mse', 'mae'], run_eagerly=True)
Here is what the end of model.summary() looks like:
When I fit it, the model produces good results.
But when I try to add a second output, the result of the first becomes much worse. The new model is trained on tuples where the first number is the same target as in the first model and the second number is a confidence value. The model is very similar to the one above.
base_model = Xception(input_shape=(224, 224, 3), weights='imagenet', include_top=False)
x = base_model.output
x = GlobalAveragePooling2D()(x)
score_x = Dense(512, activation='relu')(x)
score_out = Dense(1, activation='sigmoid', name='score_model')(score_x)
confidence_x = Dense(512, activation='relu')(x)
confidence_out = Dense(1, name='confidence_model')(confidence_x)
model = Model(inputs=base_model.input, outputs=[score_out, confidence_out])

for layer in base_model.layers:
    layer.trainable = False

losses = {'score_model': 'mae', 'confidence_model': 'mae'}
loss_weights = {'score_model': 1, 'confidence_model': 1}
model.compile(loss=losses, loss_weights=loss_weights, optimizer=optimizer, metrics=['mse', 'mae'], run_eagerly=True)
When I look at model.summary(), it has twice as many trainable parameters as the previous model, which is exactly what I was expecting. Everything looks right to me so far.
But when I train this model the performance on the score is much worse. I was thinking it would be the same (within stochastic variation). After the first epoch, the loss from the first model is around 0.125. The score_model_loss from the second model is around 0.554. Clearly I'm not completely separating the models. What am I missing?
Note: This answer will work well only because the layers that do the feature extraction are frozen. As @Akshay Sehgal stated in the comments:
optimizing for 2 goals together is actually a completely different problem than optimizing 2 independent goals separately
In that case, we are optimizing for 2 goals separately.
The easiest solution is probably to write a custom training loop with two tf.GradientTapes, one for each goal. Let's consider this really simple example:
Dummy data
Let's create some random data:
import tensorflow as tf
X = tf.random.normal((1000,1))
y1= 3*X + 1
y2 = -2*X +2
ds = tf.data.Dataset.from_tensor_slices((X,y1,y2)).batch(10)
Creating a model with 2 outputs
In this example, I skip the feature extraction step, as a simple linear regression will work for the data. But since your feature extractor network is frozen, the situation is similar.
inp = tf.keras.Input((1,))
dense_1 = tf.keras.layers.Dense(1, name="objective1")(inp)
dense_2 = tf.keras.layers.Dense(1, name="objective2")(inp)
model = tf.keras.Model(inputs=inp, outputs=[dense_1, dense_2])
# setting up the loss functions as well as the optimizer
opt = tf.optimizers.SGD()
loss_func1 = tf.losses.mean_squared_error
loss_func2 = tf.losses.mean_absolute_error
Note the names given to the two dense layers: I will use them later to retrieve the appropriate weights.
Getting the weights to optimize
We can use the names set before to retrieve the variables belonging to each objective:
var1, var2 = [], []
for l in model.layers:
    if "objective1" in l.name:
        var1 += l.trainable_variables
    if "objective2" in l.name:
        var2 += l.trainable_variables
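An equivalent, slightly more direct way to do the same thing (relying on the layer names defined above) would be to fetch each layer by name:

var1 = model.get_layer("objective1").trainable_variables
var2 = model.get_layer("objective2").trainable_variables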
The training loop
You simply need two tapes, one for each objective. You can also use a different optimizer for each, if it makes the training better.
counter = 0
for x, y1, y2 in ds:
    counter += 1
    with tf.GradientTape() as tape1, tf.GradientTape() as tape2:
        pred1, pred2 = model(x)
        loss1 = loss_func1(y1, pred1)
        loss2 = loss_func2(y2, pred2)
    grad1 = tape1.gradient(loss1, var1)
    grad2 = tape2.gradient(loss2, var2)
    opt.apply_gradients(zip(grad1, var1))
    opt.apply_gradients(zip(grad2, var2))
    if counter % 10:
        print(f"Step : {counter}, objective1: {tf.reduce_mean(loss1)}, objective2: {tf.reduce_mean(loss2)}")
If we run the training, we get:
Step : 1, objective1: 4.609124183654785, objective2: 2.6634981632232666
[...]
Step : 99, objective1: 7.176481902227555e-14, objective2: 0.030187154188752174
The principal advantage of training this way is that you only need to extract the features once for the two objectives.
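As mentioned above, each objective can also get its own optimizer. A minimal sketch of that variant, assuming the same model, loss functions, and variable lists as in the training loop above:

opt1 = tf.optimizers.SGD()
opt2 = tf.optimizers.Adam()

for x, y1, y2 in ds:
    with tf.GradientTape() as tape1, tf.GradientTape() as tape2:
        pred1, pred2 = model(x)
        loss1 = loss_func1(y1, pred1)
        loss2 = loss_func2(y2, pred2)
    # each optimizer only updates the variables of its own objective
    opt1.apply_gradients(zip(tape1.gradient(loss1, var1), var1))
    opt2.apply_gradients(zip(tape2.gradient(loss2, var2), var2))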

Character-based Text Classification with Triplet Loss

I'm trying to implement a text classifier using triplet loss to classify different job descriptions into categories, based on this paper. But whatever I do, the classifier yields very bad results.
For the embedding I followed this tutorial, and the NN architecture is based on this article.
I create my encodings using:
max_char_length = 20
group_numbers = range(0, len(job_groups))
char_vocabulary = {'PAD': 0}
X_char = []
y_temp = []
i = 1
for group, number in zip(job_groups, group_numbers):
    for job in group:
        job_cleaned = some_cleaning_function(job)
        job_enc = []
        for c in job_cleaned:
            if c in char_vocabulary.keys():
                job_enc.append(char_vocabulary[c])
            else:
                char_vocabulary[c] = i
                job_enc.append(char_vocabulary[c])
                i += 1
        X_char.append(job_enc)
        y_temp.append(number)
X_char = pad_sequences(X_char, maxlen=max_char_length, truncating='post')
My Neural Network is set up the following way:
def create_base_model():
    char_in = Input(shape=(max_char_length,), name='Char_Input')
    char_enc = Embedding(input_dim=len(char_vocabulary)+1, output_dim=20, mask_zero=True, name='Char_Embedding')(char_in)
    x = Bidirectional(LSTM(64, return_sequences=True, recurrent_dropout=0.2, dropout=0.4))(char_enc)
    x = Bidirectional(LSTM(64, return_sequences=True, recurrent_dropout=0.2, dropout=0.4))(x)
    x = Bidirectional(LSTM(64, return_sequences=True, recurrent_dropout=0.2, dropout=0.4))(x)
    x = Bidirectional(LSTM(64, return_sequences=False, recurrent_dropout=0.2, dropout=0.4))(x)
    out = Dense(128, activation="softmax")(x)
    return Model(char_in, out)

def get_siamese_triplet_char():
    anchor_input_c = Input(shape=(max_char_length,), name='Char_Input_Anchor')
    pos_input_c = Input(shape=(max_char_length,), name='Char_Input_Positive')
    neg_input_c = Input(shape=(max_char_length,), name='Char_Input_Negative')
    base_model = create_base_model()
    encoded_anchor = base_model(anchor_input_c)
    encoded_positive = base_model(pos_input_c)
    encoded_negative = base_model(neg_input_c)
    inputs = [anchor_input_c, pos_input_c, neg_input_c]
    outputs = [encoded_anchor, encoded_positive, encoded_negative]
    siamese_triplet = Model(inputs, outputs)
    siamese_triplet.add_loss(triplet_loss(outputs))
    siamese_triplet.compile(loss=None, optimizer='adam')
    return siamese_triplet, base_model
The triplet loss is defined as follows:
def triplet_loss(inputs):
    anchor, positive, negative = inputs
    positive_distance = K.square(anchor - positive)
    negative_distance = K.square(anchor - negative)
    positive_distance = K.sqrt(K.sum(positive_distance, axis=-1, keepdims=True))
    negative_distance = K.sqrt(K.sum(negative_distance, axis=-1, keepdims=True))
    loss = positive_distance - negative_distance
    loss = K.maximum(0.0, 1 + loss)
    return K.mean(loss)
The model is then trained with:
siamese_triplet_char.fit(x=[Anchor_chars_train,
                            Positive_chars_train,
                            Negative_chars_train],
                         shuffle=True, batch_size=8, epochs=22, verbose=1)
My goal is to: first, train the network with no label data in order to minimize the space of the different phrases, and second, add a classification layer and create the final classifier.
My general problem is that even though the first phase shows decreasing loss values, it overfits, the validation results jump around, and the second phase fails badly as I'm not able to train the model to actually classify.
My questions are the following:
Could someone explain the embedding architecture? What is the output dimension referring to? The individual characters? Would that even make sense? Or is there a better way to encode the input data?
How can I add validation_data to a network that does not contain labeled data? I could use validation_split, but I would prefer to pass specific data to validate, as my data is stratified.
Is there a reason why the classification does not work? Applying a simple K-Nearest Neighbor algorithm achieves at best 0.5 accuracy! Is it because of the data? Or is there a systematic error in my system?
All ideas and suggestions are really appreciated!

Getting extremely low loss in a bidirectional RNN?

I have implemented a bi-directional RNN in TensorFlow using a BasicLSTMCell and rnn.bidirectional_rnn. I am calculating the loss using seq2seq.sequence_loss_by_example after concatenating the outputs I receive. My application is a next character predictor.
I am getting an extremely low cost (~50 times lower than the unidirectional RNN). I suspect I've made a mistake in the seq2seq.sequence_loss_by_example step.
Here is my model -
# Model begins
cell_fn = rnn_cell.BasicLSTMCell
cell = fw_cell = cell_fn(args.rnn_size, state_is_tuple=True)
cell2 = bw_cell = cell_fn(args.rnn_size, state_is_tuple=True)
input_data = tf.placeholder(tf.int32, [args.batch_size, args.seq_length])
targets = tf.placeholder(tf.int32, [args.batch_size, args.seq_length])
initial_state = fw_cell.zero_state(args.batch_size, tf.float32)
initial_state2 = bw_cell.zero_state(args.batch_size, tf.float32)

with tf.variable_scope('rnnlm'):
    softmax_w = tf.get_variable("softmax_w", [2*args.rnn_size, args.vocab_size])
    softmax_b = tf.get_variable("softmax_b", [args.vocab_size])
    with tf.device("/cpu:0"):
        embedding = tf.get_variable("embedding", [args.vocab_size, args.rnn_size])
        input_embeddings = tf.nn.embedding_lookup(embedding, input_data)
        inputs = tf.unpack(input_embeddings, axis=1)

outputs, last_state, last_state2 = rnn.bidirectional_rnn(fw_cell,
                                                         bw_cell,
                                                         inputs,
                                                         initial_state_fw=initial_state,
                                                         initial_state_bw=initial_state2,
                                                         dtype=tf.float32)
output = tf.reshape(tf.concat(1, outputs), [-1, 2*args.rnn_size])
logits = tf.matmul(output, softmax_w) + softmax_b
probs = tf.nn.softmax(logits)
loss = seq2seq.sequence_loss_by_example([logits],
                                        [tf.reshape(targets, [-1])],
                                        [tf.ones([args.batch_size * args.seq_length])],
                                        args.vocab_size)
cost = tf.reduce_sum(loss) / args.batch_size / args.seq_length
lr = tf.Variable(0.0, trainable=False)
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars),
                                  args.grad_clip)
optimizer = tf.train.AdamOptimizer(lr)
train_op = optimizer.apply_gradients(zip(grads, tvars))
I think there is no mistake in your code.
The problem is the objective function used with the Bi-RNN model in your application (next character prediction).
A unidirectional RNN (such as ptb_word_lm or char-rnn-tensorflow) is really a model for this kind of prediction. For example, if raw_text is 1,3,5,2,4,8,9,0, then your inputs and targets will be:
inputs: 1,3,5,2,4,8,9
target: 3,5,2,4,8,9,0
and the predictions are (1)->3, (1,3)->5, ..., (1,3,5,2,4,8,9)->0
But in a Bi-RNN, the first prediction is really not just (1)->3, because output[0] in your code contains the reverse information of the raw_text via bw_cell (and likewise the other predictions are not simply (1,3)->5, ..., (1,3,5,2,4,8,9)->0). A similar example: I tell you that the flower is a rose, and then I ask you to predict what the flower is. I think you can give me the right answer very easily, and this is also the reason why you are getting an extremely low loss with your Bi-RNN model for this application.
In fact, I think a Bi-RNN (or Bi-LSTM) is not an appropriate model for next character prediction. A Bi-RNN needs the full sequence to work, and you will find you can't use this model easily when you want to predict the next character.
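To make the information leak concrete, here is a tiny illustration in plain Python with a hypothetical toy sequence: at position t the backward direction has already read the characters from t to the end of the input, and those already contain the character the model is asked to predict.

raw_text = [1, 3, 5, 2, 4, 8, 9, 0]
inputs = raw_text[:-1]   # 1,3,5,2,4,8,9
targets = raw_text[1:]   # 3,5,2,4,8,9,0

t = 0
backward_context = inputs[t:]           # what bw_cell has read when predicting position t
print(targets[t] in backward_context)   # True -- the "future" already contains the answer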
