I am trying to solve a multilabel classification problem with an imbalanced dataset.
The dataset has 1130 samples in total; the first class occurs in 913 of them, the second in 215, and the third in 423.
In the model architecture, I have 3 output nodes, and have applied sigmoid activation.
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Input, GlobalAveragePooling2D, Dense
from tensorflow.keras.models import Model

input_tensor = Input(shape=(256, 256, 3))
base_model = VGG16(input_tensor=input_tensor, weights='imagenet', pooling=None, include_top=False)
# base_model.summary()
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = tf.math.reduce_max(x, axis=0, keepdims=True)  # reduces over axis 0, the batch dimension
x = Dense(512, activation='relu')(x)
output_1 = Dense(3, activation='sigmoid')(x)
sagittal_model_abn = Model(inputs=base_model.input, outputs=output_1)

for layer in base_model.layers:
    layer.trainable = True
I am using binary cross-entropy loss, which I calculate with the function below, and I weight the loss to deal with the imbalance.
from tensorflow.keras import backend as K

def weighted_bce_loss(y_true, y_pred):
    if y_true[0] == 1:
        loss_abn = -1 * K.log(y_pred[0][0]) * cwb[0][1]
    elif y_true[0] == 0:
        loss_abn = -1 * K.log(1 - y_pred[0][0]) * cwb[0][0]

    if y_true[1] == 1:
        loss_acl = -1 * K.log(y_pred[0][1]) * cwb[1][1]
    elif y_true[1] == 0:
        loss_acl = -1 * K.log(1 - y_pred[0][1]) * cwb[1][0]

    if y_true[2] == 1:
        loss_men = -1 * K.log(y_pred[0][2]) * cwb[2][1]
    elif y_true[2] == 0:
        loss_men = -1 * K.log(1 - y_pred[0][2]) * cwb[2][0]

    loss_value_ds = loss_abn + loss_acl + loss_men
    return loss_value_ds
cwb contains the class weights.
y_true is the ground-truth label vector of length 3.
y_pred is a NumPy array with shape (1, 3).
I weight each class's labels individually by occurrence and non-occurrence: if the label is 1, I count it as an occurrence; if it is 0, it is a non-occurrence.
So the first class's label 1 occurs in 913 of the 1130 samples.
The class weight of label 1 for the first class is therefore 1130/913, which is about 1.24, and the weight of label 0 for the first class is 1130/(1130 - 913), which is about 5.21.
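A minimal sketch of that weight computation (assuming counts holds the number of samples in which each class occurs):

n_samples = 1130
counts = [913, 215, 423]  # occurrences of each class

cwb = {
    i: {
        0: n_samples / (n_samples - c),  # weight of label 0 (non-occurrence)
        1: n_samples / c,                # weight of label 1 (occurrence)
    }
    for i, c in enumerate(counts)
}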
When I train the model, the accuracy oscillates (or stays almost the same) while the loss decreases, and I am getting predictions like this for every sample:
[[0.51018655 0.5010625 0.50482965]]
The prediction values stay in the range 0.49 to 0.51 for all the classes in every iteration.
I tried changing the number of nodes in the FC layer, but it still behaves the same way.
Can anyone help?
Does using tf.math.reduce_max cause the problem? Would wrapping the operation I am doing with tf.math.reduce_max in a @tf.function be beneficial?
NOTE:
I am weighting the labels 1 and 0 for each class separately.
cwb = {0: {0: 5.207373271889401, 1: 1.2376779846659365},
1: {0: 1.2255965292841648, 1: 5.4326923076923075},
2: {0: 1.5416098226466575, 1: 2.8463476070528966}}
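For reference, an equivalent vectorized form of this weighting (assuming y_true and y_pred are tensors of shape (batch, 3)) would look something like this:

from tensorflow.keras import backend as K

w = K.constant([[cwb[c][0], cwb[c][1]] for c in range(3)])  # shape (3, 2)

def weighted_bce_vectorized(y_true, y_pred):
    y_true = K.cast(y_true, y_pred.dtype)
    # Per-label weighted binary cross-entropy, summed over the 3 labels.
    per_label = -(y_true * K.log(y_pred + K.epsilon()) * w[:, 1]
                  + (1 - y_true) * K.log(1 - y_pred + K.epsilon()) * w[:, 0])
    return K.sum(per_label, axis=-1)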
EDIT:
Here are the results when I train using model.fit():
Epoch 1/20
1130/1130 [==============================] - 1383s 1s/step - loss: 4.1638 - binary_accuracy: 0.4558 - val_loss: 5.0439 - val_binary_accuracy: 0.3944
Epoch 2/20
1130/1130 [==============================] - 1397s 1s/step - loss: 4.1608 - binary_accuracy: 0.4165 - val_loss: 5.0526 - val_binary_accuracy: 0.5194
Epoch 3/20
1130/1130 [==============================] - 1402s 1s/step - loss: 4.1608 - binary_accuracy: 0.4814 - val_loss: 5.1469 - val_binary_accuracy: 0.6361
Epoch 4/20
1130/1130 [==============================] - 1407s 1s/step - loss: 4.1722 - binary_accuracy: 0.4472 - val_loss: 5.0501 - val_binary_accuracy: 0.5583
Epoch 5/20
1130/1130 [==============================] - 1397s 1s/step - loss: 4.1591 - binary_accuracy: 0.4991 - val_loss: 5.0521 - val_binary_accuracy: 0.6028
Epoch 6/20
1130/1130 [==============================] - 1375s 1s/step - loss: 4.1596 - binary_accuracy: 0.5431 - val_loss: 5.0515 - val_binary_accuracy: 0.5917
Epoch 7/20
1130/1130 [==============================] - 1370s 1s/step - loss: 4.1595 - binary_accuracy: 0.4962 - val_loss: 5.0526 - val_binary_accuracy: 0.6000
Epoch 8/20
1130/1130 [==============================] - 1387s 1s/step - loss: 4.1591 - binary_accuracy: 0.5316 - val_loss: 5.0523 - val_binary_accuracy: 0.6028
Epoch 9/20
1130/1130 [==============================] - 1391s 1s/step - loss: 4.1590 - binary_accuracy: 0.4909 - val_loss: 5.0521 - val_binary_accuracy: 0.6028
Epoch 10/20
1130/1130 [==============================] - 1400s 1s/step - loss: 4.1590 - binary_accuracy: 0.5369 - val_loss: 5.0519 - val_binary_accuracy: 0.6028
Epoch 11/20
1130/1130 [==============================] - 1397s 1s/step - loss: 4.1590 - binary_accuracy: 0.4808 - val_loss: 5.0519 - val_binary_accuracy: 0.6028
Epoch 12/20
1130/1130 [==============================] - 1394s 1s/step - loss: 4.1590 - binary_accuracy: 0.5469 - val_loss: 5.0522 - val_binary_accuracy: 0.6028
I would try the label powerset method.
Instead of 3 output nodes, set the output size to the total number of label combinations possible in your dataset. For example, for a multi-label problem with 3 distinct classes, there are 2^3 - 1 = 7 possible non-empty combinations.
Say the labels are A, B, and C. Map output 0 to A, 1 to B, 2 to C, 3 to AB, 4 to AC, and so on.
Using a simple transformation function before training and for testing, this problem can be converted to a multi-class, single-label problem.
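A minimal sketch of such a transformation (a simple bit-based encoding, so the numbering differs slightly from the mapping above, and index 0 would mean "no label"):

def to_powerset_index(label_vec):
    # Treat the binary label vector [A, B, C] as bits: A*1 + B*2 + C*4.
    return sum(bit << i for i, bit in enumerate(label_vec))

def from_powerset_index(idx, n_classes=3):
    # Inverse mapping: recover the binary label vector from the class index.
    return [(idx >> i) & 1 for i in range(n_classes)]

assert to_powerset_index([1, 0, 1]) == 5   # A and C -> class 5
assert from_powerset_index(5) == [1, 0, 1]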
I am new to Keras and have been practicing with resources from the web. Unfortunately, I cannot build a model without it throwing the following error:
ValueError: logits and labels must have the same shape, received ((None, 10) vs (None, 1)).
I have attempted the following:
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

DF = pd.read_csv("https://raw.githubusercontent.com/EpistasisLab/tpot/master/tutorials/MAGIC%20Gamma%20Telescope/MAGIC%20Gamma%20Telescope%20Data.csv")
X = DF.iloc[:, 0:-1]
y = DF.iloc[:, -1]
yBin = np.array([1 if x == 'g' else 0 for x in y])

scaler = StandardScaler()
X1 = scaler.fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X1, yBin, test_size=0.25, random_state=2018)
print(X_train.__class__, X_test.__class__, y_train.__class__, y_test.__class__)

model = Sequential()
model.add(Dense(6, activation="relu", input_shape=(10,)))
model.add(Dense(10, activation="softmax"))
model.build(input_shape=(None, 1))
model.summary()
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(x=X_train,
          y=y_train,
          epochs=600,
          validation_data=(X_test, y_test), verbose=1)
I have read that my model is likely wrong in terms of input parameters. What is the correct approach?
When I look at the shape of your data,
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
I see that X is 10-dimensional and y is 1-dimensional.
Therefore, you need a 10-dimensional input,
model.build(input_shape=(None, 10))
and a 1-dimensional output in the last dense layer (with a sigmoid activation, since a softmax over a single unit always outputs 1):
model.add(Dense(1, activation="sigmoid"))
The target variable yBin/y_train/y_test is a 1-D array (it has shape (None, 1) for a given batch).
Your logits come from the last Dense layer, which has 10 neurons with softmax activation, so it gives 10 outputs for each input, or (batch_size, 10) per batch; this is represented formally as (None, 10).
To resolve the shape mismatch in question, change the neuron count of the last dense layer to 1 and set the activation function to "sigmoid":
model.add(Dense(1, activation="sigmoid"))
As correctly mentioned by @MSS, you need to use a sigmoid activation function with 1 neuron in the last dense layer to match the logits with the labels (1, 0) of your dataset, which indicate a binary class.
Fixed code:
model = Sequential()
model.add(Dense(6, activation="relu", input_shape=(10,)))
model.add(Dense(1, activation="sigmoid"))
# model.build(input_shape=(None, 1))
model.summary()
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x=X_train, y=y_train, epochs=10, validation_data=(X_test, y_test), verbose=1)
Output:
Epoch 1/10
446/446 [==============================] - 3s 4ms/step - loss: 0.5400 - accuracy: 0.7449 - val_loss: 0.4769 - val_accuracy: 0.7800
Epoch 2/10
446/446 [==============================] - 2s 4ms/step - loss: 0.4425 - accuracy: 0.7987 - val_loss: 0.4241 - val_accuracy: 0.8095
Epoch 3/10
446/446 [==============================] - 2s 3ms/step - loss: 0.4082 - accuracy: 0.8175 - val_loss: 0.4034 - val_accuracy: 0.8242
Epoch 4/10
446/446 [==============================] - 2s 3ms/step - loss: 0.3934 - accuracy: 0.8286 - val_loss: 0.3927 - val_accuracy: 0.8313
Epoch 5/10
446/446 [==============================] - 2s 4ms/step - loss: 0.3854 - accuracy: 0.8347 - val_loss: 0.3866 - val_accuracy: 0.8320
Epoch 6/10
446/446 [==============================] - 2s 4ms/step - loss: 0.3800 - accuracy: 0.8397 - val_loss: 0.3827 - val_accuracy: 0.8364
Epoch 7/10
446/446 [==============================] - 2s 4ms/step - loss: 0.3762 - accuracy: 0.8411 - val_loss: 0.3786 - val_accuracy: 0.8387
Epoch 8/10
446/446 [==============================] - 2s 3ms/step - loss: 0.3726 - accuracy: 0.8432 - val_loss: 0.3764 - val_accuracy: 0.8404
Epoch 9/10
446/446 [==============================] - 2s 3ms/step - loss: 0.3695 - accuracy: 0.8466 - val_loss: 0.3724 - val_accuracy: 0.8408
Epoch 10/10
446/446 [==============================] - 2s 4ms/step - loss: 0.3665 - accuracy: 0.8478 - val_loss: 0.3698 - val_accuracy: 0.8454
<keras.callbacks.History at 0x7f68ca30f670>
I am trying to predict price values from datasets using Keras. I am following this tutorial: https://keras.io/examples/structured_data/structured_data_classification_from_scratch/, but when I get to the part of fitting the model, I get a huge negative loss and a very small accuracy:
Epoch 1/50
1607/1607 [==============================] - ETA: 0s - loss: -117944.7500 - accuracy: 3.8897e-05
2022-05-22 11:14:28.922065: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
1607/1607 [==============================] - 15s 10ms/step - loss: -117944.7500 - accuracy: 3.8897e-05 - val_loss: -123246.0547 - val_accuracy: 7.7791e-05
Epoch 2/50
1607/1607 [==============================] - 15s 9ms/step - loss: -117944.7734 - accuracy: 3.8897e-05 - val_loss: -123246.0547 - val_accuracy: 7.7791e-05
Epoch 3/50
1607/1607 [==============================] - 15s 10ms/step - loss: -117939.4844 - accuracy: 3.8897e-05 - val_loss: -123245.9922 - val_accuracy: 7.7791e-05
Epoch 4/50
1607/1607 [==============================] - 16s 10ms/step - loss: -117944.0859 - accuracy: 3.8897e-05 - val_loss: -123245.9844 - val_accuracy: 7.7791e-05
Epoch 5/50
1607/1607 [==============================] - 15s 10ms/step - loss: -117944.7422 - accuracy: 3.8897e-05 - val_loss: -123246.0547 - val_accuracy: 7.7791e-05
Epoch 6/50
1607/1607 [==============================] - 15s 10ms/step - loss: -117944.8203 - accuracy: 3.8897e-05 - val_loss: -123245.9766 - val_accuracy: 7.7791e-05
Epoch 7/50
1607/1607 [==============================] - 15s 10ms/step - loss: -117944.8047 - accuracy: 3.8897e-05 - val_loss: -123246.0234 - val_accuracy: 7.7791e-05
Epoch 8/50
1607/1607 [==============================] - 15s 10ms/step - loss: -117944.7578 - accuracy: 3.8897e-05 - val_loss: -123245.9766 - val_accuracy: 7.7791e-05
Epoch 9/50
This is my model graph; the code looks like the one from the example, but adapted:
from tensorflow import keras
from tensorflow.keras import layers

# Categorical feature encoded as string
desc = keras.Input(shape=(1,), name="desc", dtype="string")
# Numerical features
date = keras.Input(shape=(1,), name="date")
quant = keras.Input(shape=(1,), name="quant")

all_inputs = [
    desc,
    quant,
    date,
]

# String categorical features
desc_encoded = encode_categorical_feature(desc, "desc", train_ds)
# Numerical features
quant_encoded = encode_numerical_feature(quant, "quant", train_ds)
date_encoded = encode_numerical_feature(date, "date", train_ds)

all_features = layers.concatenate(
    [
        desc_encoded,
        quant_encoded,
        date_encoded,
    ]
)
x = layers.Dense(32, activation="sigmoid")(all_features)
x = layers.Dropout(0.5)(x)
output = layers.Dense(1, activation="relu")(x)
model = keras.Model(all_inputs, output)
model.compile("adam", "binary_crossentropy", metrics=["accuracy"])
And the dataset looks like this:
date desc quant price
0 20140101.0 CARBONATO DE DIMETILO 999.00 1428.57
1 20140101.0 HIDROQUINONA 137.00 1314.82
2 20140101.0 1,5 PENTANODIOL TECN. 495.00 2811.60
3 20140101.0 SOSA CAUSTICA LIQUIDA 50% 567160.61 113109.14
4 20140101.0 BOROHIDRURO SODICO 6.24 299.27
Also, I am converting the date from YYYY-MM-DD format to a number using:
dataset['date'] = pd.to_datetime(dataset["date"]).dt.strftime("%Y%m%d").astype('float64')
What am I doing wrong? :(
EDIT: I thought the encoder function from the tutorial was normalizing the data, but it wasn't. Do you know of any other tutorial that can guide me better? The loss problem has been fixed! (It was due to normalization.)
You seem to be quite confused by the components of your model.
Binary cross-entropy is a classification loss, but your problem is regression, so use MSE. "Accuracy" also makes no sense for regression; change it to MSE too.
Your data is huge, and thus your loss is huge. There is a price of 113109.14 in the data; what if your model is bad initially and predicts 0? You get a loss of roughly 100,000^2 = 10,000,000,000. Normalize your data, in your case the output variable (the target, price), to lie between -1 and 1.
There are some use cases where an output neuron should have an activation function, but unless you know why you are doing this, leaving it linear is a much safer choice.
Dropout is a method for regularizing your model. Do not start with it; always start with the simplest possible model, and make sure it can learn before trying to maximize test score.
Neural networks will not extrapolate, so feeding in an ever-growing signal (the raw date) will almost surely cause problems.
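Putting these points together, a minimal standalone sketch with toy data (names and sizes are illustrative, not your real pipeline):

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy data standing in for the real features.
X = np.random.rand(1000, 3).astype("float32")
y = X @ np.array([3.0, -2.0, 1.0], dtype="float32") * 1000.0

# Normalize the target into roughly [-1, 1] before fitting.
y_scaled = (y - y.mean()) / np.abs(y - y.mean()).max()

inputs = keras.Input(shape=(3,))
x = layers.Dense(32, activation="relu")(inputs)
output = layers.Dense(1)(x)  # linear head: no activation for regression
model = keras.Model(inputs, output)
model.compile(optimizer="adam", loss="mse", metrics=["mse"])
model.fit(X, y_scaled, epochs=5, verbose=0)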
A simple example task of binary classification: the features are a 2D array with values in [-1, 1]. (The dataset comes from make_circles in sklearn.datasets.)
My model:
import tensorflow as tf

model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(6,
                                activation=tf.keras.activations.sigmoid,
                                input_shape=(2,),
                                kernel_regularizer='l2'))
model.add(tf.keras.layers.Dense(6,
                                activation=tf.keras.activations.sigmoid,
                                kernel_regularizer='l2'))
model.add(tf.keras.layers.Dense(1,
                                activation=tf.keras.activations.softmax))
model.compile(optimizer=tf.keras.optimizers.RMSprop(),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy()])
The output shows that the loss is decreasing while the binary accuracy stays frozen at 0.5067 across all epochs.
Epoch 1/10
45/45 [==============================] - 1s 8ms/step - loss: 0.7802 - binary_accuracy: 0.5067 - val_loss: 0.7563 - val_binary_accuracy: 0.4400
Epoch 2/10
45/45 [==============================] - 0s 4ms/step - loss: 0.7605 - binary_accuracy: 0.5067 - val_loss: 0.7486 - val_binary_accuracy: 0.4400
Epoch 3/10
45/45 [==============================] - 0s 3ms/step - loss: 0.7487 - binary_accuracy: 0.5067 - val_loss: 0.7425 - val_binary_accuracy: 0.4400
Epoch 4/10
45/45 [==============================] - 0s 3ms/step - loss: 0.7396 - binary_accuracy: 0.5067 - val_loss: 0.7368 - val_binary_accuracy: 0.4400
Epoch 5/10
45/45 [==============================] - 0s 4ms/step - loss: 0.7320 - binary_accuracy: 0.5067 - val_loss: 0.7306 - val_binary_accuracy: 0.4400
Epoch 6/10
45/45 [==============================] - 0s 4ms/step - loss: 0.7254 - binary_accuracy: 0.5067 - val_loss: 0.7269 - val_binary_accuracy: 0.4400
Epoch 7/10
45/45 [==============================] - 0s 3ms/step - loss: 0.7203 - binary_accuracy: 0.5067 - val_loss: 0.7211 - val_binary_accuracy: 0.4400
Epoch 8/10
45/45 [==============================] - 0s 2ms/step - loss: 0.7157 - binary_accuracy: 0.5067 - val_loss: 0.7158 - val_binary_accuracy: 0.4400
Epoch 9/10
45/45 [==============================] - 0s 2ms/step - loss: 0.7118 - binary_accuracy: 0.5067 - val_loss: 0.7116 - val_binary_accuracy: 0.4400
Epoch 10/10
45/45 [==============================] - 0s 2ms/step - loss: 0.7082 - binary_accuracy: 0.5067 - val_loss: 0.7099 - val_binary_accuracy: 0.4400
This is a very basic error that most of us make when starting to learn binary classification. The reason your accuracy is frozen is that the last layer has only one unit. Since it's a binary classification task with a softmax output, there should be 2 units in the last layer. Your code will look like this:
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(6,
                                activation=tf.keras.activations.sigmoid,
                                input_shape=(2,),
                                kernel_regularizer='l2'))
model.add(tf.keras.layers.Dense(6,
                                activation=tf.keras.activations.sigmoid,
                                kernel_regularizer='l2'))
model.add(tf.keras.layers.Dense(2,
                                activation=tf.keras.activations.softmax))
model.compile(optimizer=tf.keras.optimizers.RMSprop(),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy()])
Now when you pass your input data through the model, you will get an array (tensor) of length 2 as output, e.g. [0.37, 0.63]. If you take a closer look, you will see that both entries of this array sum to 1.0 (0.37 + 0.63). It means your given input belongs to class 1 with a probability of 0.63 and to class 0 with a probability of 0.37.
Whereas in your case (with 1 unit and a softmax in the final layer), the output is always [1.0], which says nothing about which class your input belongs to.
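A quick illustrative check that a 2-unit softmax produces a probability pair summing to 1 (the logit values here are made up):

import tensorflow as tf

# Two logits per sample -> softmax yields a probability pair summing to 1.
print(tf.keras.activations.softmax(tf.constant([[0.3, 0.83]])))
# -> approximately [[0.37, 0.63]]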
Try replacing the softmax activation function with the sigmoid activation function for your last layer:
model.add(tf.keras.layers.Dense(units=1, activation='sigmoid'))
Softmax is used for multiclass classification.
EDIT: As expected, a softmax activation on (batch_size, 1)-dimensional data gives a tensor of only 1's:
tf.keras.activations.softmax(tf.constant([[-55.], [.1], [102.]]))
>> <tf.Tensor: shape=(3, 1), dtype=float32, numpy=
>> array([[1.],
>>        [1.],
>>        [1.]], dtype=float32)>
However, I'm not sure why your loss is decreasing steadily.
I'm new to Keras and I'm using it to build an ordinary neural network to classify the MNIST digit dataset.
Beforehand, I split the data into 3 parts: 55000 to train, 5000 to validate, and 10000 to test, and I scaled the pixel values down by dividing them by 255.0.
My model looks like this:
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28,28]))
model.add(keras.layers.Dense(100, activation='relu'))
model.add(keras.layers.Dense(10, activation='softmax'))
And here is the compile:
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='Adam',
              metrics=['accuracy'])
I train the model:
his = model.fit(xTrain, yTrain, epochs = 20, validation_data=(xValid, yValid))
At first the val_loss decreases, but then it increases, even though the accuracy keeps improving.
Train on 55000 samples, validate on 5000 samples
Epoch 1/20
55000/55000 [==============================] - 5s 91us/sample - loss: 0.2822 - accuracy: 0.9199 - val_loss: 0.1471 - val_accuracy: 0.9588
Epoch 2/20
55000/55000 [==============================] - 5s 82us/sample - loss: 0.1274 - accuracy: 0.9626 - val_loss: 0.1011 - val_accuracy: 0.9710
Epoch 3/20
55000/55000 [==============================] - 5s 83us/sample - loss: 0.0899 - accuracy: 0.9734 - val_loss: 0.0939 - val_accuracy: 0.9742
Epoch 4/20
55000/55000 [==============================] - 5s 84us/sample - loss: 0.0674 - accuracy: 0.9796 - val_loss: 0.0760 - val_accuracy: 0.9770
Epoch 5/20
55000/55000 [==============================] - 5s 94us/sample - loss: 0.0541 - accuracy: 0.9836 - val_loss: 0.0842 - val_accuracy: 0.9742
Epoch 15/20
55000/55000 [==============================] - 4s 82us/sample - loss: 0.0103 - accuracy: 0.9967 - val_loss: 0.0963 - val_accuracy: 0.9788
Epoch 16/20
55000/55000 [==============================] - 5s 84us/sample - loss: 0.0092 - accuracy: 0.9973 - val_loss: 0.0956 - val_accuracy: 0.9774
Epoch 17/20
55000/55000 [==============================] - 5s 82us/sample - loss: 0.0081 - accuracy: 0.9977 - val_loss: 0.0977 - val_accuracy: 0.9770
Epoch 18/20
55000/55000 [==============================] - 5s 85us/sample - loss: 0.0076 - accuracy: 0.9977 - val_loss: 0.1057 - val_accuracy: 0.9760
Epoch 19/20
55000/55000 [==============================] - 5s 83us/sample - loss: 0.0063 - accuracy: 0.9980 - val_loss: 0.1108 - val_accuracy: 0.9774
Epoch 20/20
55000/55000 [==============================] - 5s 85us/sample - loss: 0.0066 - accuracy: 0.9980 - val_loss: 0.1056 - val_accuracy: 0.9768
And when I evaluate on the test set, the loss is far too high:
model.evaluate(xTest, yTest)
Result:
10000/10000 [==============================] - 0s 41us/sample - loss: 25.7150 - accuracy: 0.9740
[25.714989705941953, 0.974]
Is this ok, or is it a sign of overfitting? Should I do something to improve it? Thanks in advance.
Usually, it is not OK. You want the loss to be as small as possible. Your result is typical of overfitting: your network "knows" its training data but isn't capable of analyzing new images. You may want to add some layers, maybe convolutional layers or a dropout layer. Another idea would be to augment your training images; the ImageDataGenerator class provided by Keras might help you out here.
Another thing to look at could be your hyperparameters. Why do you use 100 nodes in the first dense layer? Something like 784 (28*28) might be more interesting if you want to start with a dense layer. I would suggest some combination of convolutional, dropout, and dense layers; then your dense layer may not need that many nodes (see the sketch below).
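A minimal sketch of that convolutional-dropout-dense layout for 28x28 inputs (the layer sizes are illustrative, not tuned):

from tensorflow import keras

model = keras.models.Sequential([
    keras.layers.Reshape((28, 28, 1), input_shape=(28, 28)),  # add a channel dim
    keras.layers.Conv2D(32, 3, activation='relu'),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(64, 3, activation='relu'),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(10, activation='softmax'),
])
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='Adam',
              metrics=['accuracy'])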
How do I define a custom Keras metric that computes accuracy like so?
y_true = [12.5, 45.5]
y_predicted = [14.5, 29]
splits = [-float("inf"), 10, 20, 30, float("inf")]
"""
Splits to Classes translation =>
Class 0: -inf to 9
Class 1: 10 to 19
Class 2: 20 to 29
Class 3: 30 to inf
"""
# using the above translation,
y_true_classes = [1, 3]
y_predicted_classes = [1, 2]
accuracy = K.mean(K.equal(y_true_classes, y_predicted_classes))  # => 0.5 here
return accuracy
Here is an idea of how you might go about implementing this (though probably not the best one).
import tensorflow as tf
from keras import backend as K

def convert_to_classes(vals, splits):
    out = tf.zeros_like(vals, dtype=tf.int32)
    for split in splits:
        out = tf.where(vals > split, out + 1, out)
    return out

def my_acc(splits):
    def custom_acc(y_true, y_pred):
        y_true = convert_to_classes(y_true, splits)
        y_pred = convert_to_classes(y_pred, splits)
        return K.mean(K.equal(y_true, y_pred))
    return custom_acc
The function convert_to_classes converts the floats to buckets, assuming the outer bounds are always ±inf.
The closure my_acc lets you define the splits (without ±inf) at compile time (they are added statically to the graph), and it then returns a metric function, as expected by Keras.
Testing with TensorFlow:
y_true = tf.constant([12.5, 45.5])
y_pred = tf.constant([14.5, 29])
with tf.Session() as sess:
    print(sess.run(my_acc((10, 20, 30))(y_true, y_pred)))
gives the expected 0.5 accuracy.
And a quick test with Keras:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

x = np.random.randn(100, 10) * 100
y = np.random.randn(100) * 100

model = Sequential([Dense(20, activation='relu'),
                    Dense(1, activation=None)])
model.compile(optimizer='Adam',
              loss='mse',
              metrics=[my_acc(splits=(10, 20, 30))])
model.fit(x, y, batch_size=32, epochs=10)
Keras logs the metric under the name of the inner function in the closure, custom_acc:
Epoch 1/10
100/100 [==============================] - 0s 2ms/step - loss: 10242.2591 - custom_acc: 0.4300
Epoch 2/10
100/100 [==============================] - 0s 53us/step - loss: 10101.9658 - custom_acc: 0.4200
Epoch 3/10
100/100 [==============================] - 0s 53us/step - loss: 10011.4662 - custom_acc: 0.4300
Epoch 4/10
100/100 [==============================] - 0s 51us/step - loss: 9899.7181 - custom_acc: 0.4300
Epoch 5/10
100/100 [==============================] - 0s 50us/step - loss: 9815.1607 - custom_acc: 0.4200
Epoch 6/10
100/100 [==============================] - 0s 74us/step - loss: 9736.5554 - custom_acc: 0.4300
Epoch 7/10
100/100 [==============================] - 0s 50us/step - loss: 9667.0845 - custom_acc: 0.4400
Epoch 8/10
100/100 [==============================] - 0s 58us/step - loss: 9589.5439 - custom_acc: 0.4400
Epoch 9/10
100/100 [==============================] - 0s 61us/step - loss: 9511.8003 - custom_acc: 0.4400
Epoch 10/10
100/100 [==============================] - 0s 51us/step - loss: 9443.9730 - custom_acc: 0.4400