Writing a Bayesifier Keras wrapper - python

I'm trying to add Bayesian weight uncertainty to my Keras layers. After some coding I realized that Bayesifier could be implemented as a wrapper for any kind of layer, but I'm facing one issue.
A primer on Bayesian weight uncertainty:
Consider an ANN layer with m learnable parameters. A Bayesian version of this layer with a Gaussian distribution on the weights would store another set of m parameters. The first m would be considered means, the new set of m would be considered variances for the weights, so every weight is characterized by a normal distribution.
In the learning phase we sample weights from this distribution. Using the reparametrization trick, we can still do backpropagation.
As you can see, converting a layer to a Bayesian layer does not depend on the layer's architecture; we just duplicate its weights.
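For concreteness, here is a minimal sketch of that sampling step with the Keras backend (my own illustration; the names w_mu, w_rho and sample_weight are not from the wrapper below, though the softplus parametrization matches it):
from keras import backend as K

# Reparametrization trick: w = mu + softplus(rho) * eps, with eps ~ N(0, 1).
# The noise eps carries no gradient; mu and rho stay differentiable.
def sample_weight(w_mu, w_rho):
    eps = K.random_normal(shape=w_mu.shape)
    sigma = K.log(1. + K.exp(w_rho))  # softplus keeps the stddev positive
    return w_mu + sigma * eps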
So I built a Keras wrapper class like so:
from keras.layers.wrappers import Wrapper
from keras import backend as K
K.set_learning_phase(1)
class Bayesify(Wrapper):

    def __init__(self, base_layer,
                 variational_initializer="ones",
                 variational_regularizer=None,
                 variational_constraint=None,
                 **kwargs):
        super().__init__(base_layer, **kwargs)
        self.variational_initializer = variational_initializer
        self.variational_regularizer = variational_regularizer
        self.variational_constraint = variational_constraint
        self.mean = []
        self.variation = []

    def build(self, input_shape=None):
        super().build(input_shape)
        if not self.layer.built:
            self.layer.build(input_shape)
            self.layer.built = True
        self.mean = self.layer.trainable_weights[:]
        for tensor in self.mean:
            self.variation.append(self.add_weight(
                name="variation",
                shape=tensor.shape,
                initializer=self.variational_initializer,
                regularizer=self.variational_regularizer,
                constraint=self.variational_constraint
            ))
        self._trainable_weights = self.mean + self.variation

    def _sample_weights(self):
        return [mean + K.log(1. + K.exp(log_stddev)) * K.random_normal(shape=mean.shape)
                for mean, log_stddev in zip(self.mean, self.variation)]

    def call(self, inputs, **kwargs):
        self.layer._trainable_weights = K.in_train_phase(self._sample_weights(), self.mean)
        return self.layer.call(inputs, **kwargs)
A short script to run a Bayesian Dense layer on MNIST:
import numpy as np
from keras.models import Model
from keras.layers import Dense, Input
from keras.datasets import mnist
from bayesify import Bayesify
(lX, lY), (tX, tY) = mnist.load_data()
lX, tX = lX / 255., tX / 255.
onehot = np.eye(10)
lY, tY = onehot[lY], onehot[tY]
inputs = Input(lX.shape[1:])
x = Bayesify(Dense(30, activation="tanh"))(inputs)
x = Dense(lY.shape[1], activation="softmax")(x)
ann = Model(inputs=inputs, outputs=x)
ann.compile(optimizer="adam", loss="categorical_crossentropy")
ann.fit(lX, lY, batch_size=64, epochs=10, validation_data=(tX, tY))
The learning script raises weird internal Keras exceptions which I'm trying to figure out now.
Also I'm not entirely sure that the gradients of the wrapper's weights will be handled correctly.
I am currently pushing the sampled weights into the wrapped layer using the setter of the layer.trainable_weights property and simply forwarding call() to layer.call().
One such error:
File "/data/Prog/PyCharm/bayesforge/xp_mnist.py", line 20, in <module>
ann.fit(lX, lY, batch_size=64, epochs=10, validation_data=(tX, tY))
File "/usr/lib/python3.6/site-packages/keras/engine/training.py", line
1630, in fit
batch_size=batch_size)
File "/usr/lib/python3.6/site-packages/keras/engine/training.py", line
1480, in _standardize_user_data
exception_prefix='target')
File "/usr/lib/python3.6/site-packages/keras/engine/training.py", line
113, in _standardize_input_data
'with shape ' + str(data_shape))
ValueError: Error when checking target: expected dense_2 to have 3
dimensions, but got array with shape (60000, 10)
One small note, I only tested my scripts with the tensorflow backend.
Does anyone have any suggestion on how this wrapping could be accomplished?

Related

mat1 and mat2 shapes cannot be multiplied for GRU

I am creating a GRU to do some classification for a project, and I'm relatively new to PyTorch and implementing GRUs. I know similar questions like this one have been answered already, but I can't seem to bring the same solution over to my own problem. I understand that there is an issue with the shape/order of my fc arrays, but after trying to change things I can no longer see the wood for the trees. I would appreciate it if someone could point me in the right direction.
Below I have attached my code and the error. The datasets I'm using contain 24 features, with a label in the 25th column.
# Imports
import pandas as pd
import numpy as np
import torch
import torchvision  # torch package for vision related things
import torch.nn.functional as F  # Parameterless functions, like (some) activation functions
import torchvision.datasets as datasets  # Standard datasets
import torchvision.transforms as transforms  # Transformations we can perform on our dataset for augmentation
from torch import optim  # For optimizers like SGD, Adam, etc.
from torch import nn  # All neural network modules
from torch.utils.data import Dataset, DataLoader  # Gives easier dataset management by creating mini batches etc.
from tqdm import tqdm  # For a nice progress bar
from sklearn.preprocessing import StandardScaler

# Set device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hyperparameters
input_size = 24
hidden_size = 128
num_layers = 1
num_classes = 2
sequence_length = 1
learning_rate = 0.005
batch_size = 8
num_epochs = 3

# Recurrent neural network with GRU (many-to-one)
class RNN_GRU(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, num_classes):
        super(RNN_GRU, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.gru = nn.GRU(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size * sequence_length, num_classes)

    def forward(self, x):
        # Set initial hidden and cell states
        x = x.unsqueeze(0)
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device)

        # Forward propagate LSTM
        out, _ = self.gru(x, h0)
        out = out.reshape(out.shape[0], -1)

        # Decode the hidden state of the last time step
        out = self.fc(out)
        return out

class MyDataset(Dataset):
    def __init__(self, file_name):
        stats_df = pd.read_csv(file_name)
        x = stats_df.iloc[:, 0:24].values
        y = stats_df.iloc[:, 24].values
        self.x_train = torch.tensor(x, dtype=torch.float32)
        self.y_train = torch.tensor(y, dtype=torch.float32)

    def __len__(self):
        return len(self.y_train)

    def __getitem__(self, idx):
        return self.x_train[idx], self.y_train[idx]

nomDs = MyDataset("nomStats.csv")
atkDs = MyDataset("atkStats.csv")
train_loader = DataLoader(dataset=nomDs, batch_size=batch_size)
test_loader = DataLoader(dataset=atkDs, batch_size=batch_size)

# Initialize network (try out just using simple RNN, or GRU, and then compare with LSTM)
model = RNN_GRU(input_size, hidden_size, num_layers, num_classes).to(device)

# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=learning_rate)

# Train Network
for epoch in range(num_epochs):
    for batch_idx, (data, targets) in enumerate(tqdm(train_loader)):
        # Get data to cuda if possible
        data = data.to(device=device).squeeze(1)
        targets = targets.to(device=device)

        # forward
        scores = model(data)
        loss = criterion(scores, targets)

        # backward
        optimizer.zero_grad()
        loss.backward()

        # gradient descent update step/adam step
        optimizer.step()

# Check accuracy on training & test to see how good our model is
def check_accuracy(loader, model):
    num_correct = 0
    num_samples = 0

    # Set model to eval
    model.eval()

    with torch.no_grad():
        for x, y in loader:
            x = x.to(device=device).squeeze(1)
            y = y.to(device=device)

            scores = model(x)
            _, predictions = scores.max(1)
            num_correct += (predictions == y).sum()
            num_samples += predictions.size(0)

    # Toggle model back to train
    model.train()
    return num_correct / num_samples

print(f"Accuracy on training set: {check_accuracy(train_loader, model)*100:2f}")
print(f"Accuracy on test set: {check_accuracy(test_loader, model)*100:.2f}")
Traceback (most recent call last):
File "TESTGRU.py", line 87, in <module>
scores = model(data)
File "C:\Users\steph\anaconda3\envs\FYP\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "TESTGRU.py", line 47, in forward
out = self.fc(out)
File "C:\Users\steph\anaconda3\envs\FYP\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\steph\anaconda3\envs\FYP\lib\site-packages\torch\nn\modules\linear.py", line 94, in forward
return F.linear(input, self.weight, self.bias)
File "C:\Users\steph\anaconda3\envs\FYP\lib\site-packages\torch\nn\functional.py", line 1753, in linear
return torch._C._nn.linear(input, weight, bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x1024 and 128x2)
It seems like these lines
# Forward propagate LSTM
out, _ = self.gru(x, h0)
out = out.reshape(out.shape[0], -1)
are the problem.
It appears that you only want to feed the hidden state of the last time step.
This could be read from the output in two ways:
If you want the output of all layers at the last time step, you should use the second return value of out, _ = self.gru(x, h0), not the first.
If you want to use just the last layer's output at the last time step (which seems to be the case), you should use out[:, -1, :]. With this change, you may not need the reshape operation.
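A minimal sketch of the second option (my own illustration, not part of the answer above; it additionally assumes each sample should be treated as a batch-first sequence of length 1, i.e. unsqueeze(1) instead of the original unsqueeze(0)):
def forward(self, x):
    # x: (batch, input_size) -> treat each sample as a length-1 sequence
    x = x.unsqueeze(1)                # (batch, seq_len=1, input_size)
    h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device)
    out, _ = self.gru(x, h0)          # out: (batch, seq_len, hidden_size)
    out = out[:, -1, :]               # last layer's output at the last time step
    return self.fc(out)               # fc stays nn.Linear(hidden_size * sequence_length, num_classes)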

Why am I receiving 'ValueError: This model has not yet been built'? [Following tutorial] [duplicate]

I have a Keras model defined as follows:
class ConvLayer(Layer):
    def __init__(self, nf, ks=3, s=2, **kwargs):
        self.nf = nf
        self.grelu = GeneralReLU(leak=0.01)
        self.conv = (Conv2D(filters=nf,
                            kernel_size=ks,
                            strides=s,
                            padding="same",
                            use_bias=False,
                            activation="linear"))
        super(ConvLayer, self).__init__(**kwargs)

    def rsub(self): return -self.grelu.sub

    def set_sub(self, v): self.grelu.sub = -v

    def conv_weights(self): return self.conv.weight[0]

    def build(self, input_shape):
        # No weight to train.
        super(ConvLayer, self).build(input_shape)  # Be sure to call this at the end

    def compute_output_shape(self, input_shape):
        output_shape = (input_shape[0],
                        input_shape[1]/2,
                        input_shape[2]/2,
                        self.nf)
        return output_shape

    def call(self, x):
        return self.grelu(self.conv(x))

    def __repr__(self):
        return f'ConvLayer(nf={self.nf}, activation={self.grelu})'


class ConvModel(tf.keras.Model):
    def __init__(self, nfs, input_shape, output_shape, use_bn=False, use_dp=False):
        super(ConvModel, self).__init__(name='mlp')
        self.use_bn = use_bn
        self.use_dp = use_dp
        self.num_classes = num_classes

        # backbone layers
        self.convs = [ConvLayer(nfs[0], s=1, input_shape=input_shape)]
        self.convs += [ConvLayer(nf) for nf in nfs[1:]]

        # classification layers
        self.convs.append(AveragePooling2D())
        self.convs.append(Dense(output_shape, activation='softmax'))

    def call(self, inputs):
        for layer in self.convs: inputs = layer(inputs)
        return inputs
I'm able to compile this model without any issues
>>> model.compile(optimizer=tf.keras.optimizers.Adam(lr=lr),
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
But when I query the summary for this model, I see this error
>>> model = ConvModel(nfs, input_shape=(32, 32, 3), output_shape=num_classes)
>>> model.summary()
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-220-5f15418b3570> in <module>()
----> 1 model.summary()
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/network.py in summary(self, line_length, positions, print_fn)
1575 """
1576 if not self.built:
-> 1577 raise ValueError('This model has not yet been built. '
1578 'Build the model first by calling `build()` or calling '
1579 '`fit()` with some data, or specify '
ValueError: This model has not yet been built. Build the model first by calling `build()` or calling `fit()` with some data, or specify an `input_shape` argument in the first layer(s) for automatic build.
I'm providing input_shape to the first layer of my model, so why is it throwing this error?
The error says what to do:
This model has not yet been built. Build the model first by calling build()
model.build(input_shape) # `input_shape` is the shape of the input data
# e.g. input_shape = (None, 32, 32, 3)
model.summary()
There is a very big difference between a Keras subclassed model and the other Keras models (Sequential and Functional).
Sequential and Functional models are data structures that represent a DAG of layers. In simple words, a Functional or Sequential model is a static graph of layers built by stacking one on top of the other like LEGO. So when you provide input_shape to the first layer, these (Functional and Sequential) models can infer the shapes of all the other layers and build the model. Then you can print the input/output shapes using model.summary().
On the other hand, a subclassed model is defined via the body of a call method in Python code. For a subclassed model, there is no graph of layers: we cannot know how the layers are connected to each other (because that's defined in the body of call, not as an explicit data structure), so we cannot infer input/output shapes. For a subclassed model, the input/output shapes are unknown until it is first run on proper data. compile() does a deferred compile and waits for real data. In order for Keras to infer the shapes of the intermediate layers, we need to run the model on real data and only then call model.summary(). Without running the model on data, it will throw the error you noticed. Please check the GitHub gist for the complete code.
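For contrast, here is a minimal Sequential sketch (my own illustration, not from the linked gist): because the first layer declares input_shape, Keras builds the graph immediately and summary() works before any data is seen.
import tensorflow as tf
from tensorflow.keras import layers

seq_model = tf.keras.Sequential([
    layers.Dense(64, activation='relu', input_shape=(784,)),  # shape known up front
    layers.Dense(10),
])
seq_model.summary()  # works right away, no data or build() call needed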
The following is an example from Tensorflow website.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

class ThreeLayerMLP(keras.Model):

    def __init__(self, name=None):
        super(ThreeLayerMLP, self).__init__(name=name)
        self.dense_1 = layers.Dense(64, activation='relu', name='dense_1')
        self.dense_2 = layers.Dense(64, activation='relu', name='dense_2')
        self.pred_layer = layers.Dense(10, name='predictions')

    def call(self, inputs):
        x = self.dense_1(inputs)
        x = self.dense_2(x)
        return self.pred_layer(x)

def get_model():
    return ThreeLayerMLP(name='3_layer_mlp')

model = get_model()

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255

model.compile(loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              optimizer=keras.optimizers.RMSprop())

model.summary()  # This will throw an error as follows
# ValueError: This model has not yet been built. Build the model first by calling `build()` or calling `fit()` with some data, or specify an `input_shape` argument in the first layer(s) for automatic build.

# Need to run with real data to infer shape of different layers
history = model.fit(x_train, y_train,
                    batch_size=64,
                    epochs=1)

model.summary()
Thanks!
Another method is to add the input_shape argument like this:
model = Sequential()
model.add(Bidirectional(LSTM(n_hidden, return_sequences=False, dropout=0.25,
                             recurrent_dropout=0.1), input_shape=(n_steps, dim_input)))

# X is a train dataset with features excluding a target variable
input_shape = X.shape
model.build(input_shape)
model.summary()
Make sure you create your model properly. A small typo like the following may also cause a problem:
model = Model(some-input, some-output, "model-name")
while the correct code should be:
model = Model(some-input, some-output, name="model-name")
If your TensorFlow/Keras version is 2.5.0, then import the Keras package through TensorFlow.
Not this:
from tensorflow import keras
from keras.models import Sequential
import tensorflow as tf
Like this:
from tensorflow import keras
from tensorflow.keras.models import Sequential
import tensorflow as tf
Version mismatches between TensorFlow and Keras can be the reason for this.
I encountered the same problem while training an LSTM model for regression.
Error:
ValueError: This model has not yet been built. Build the model first
by calling build() or by calling the model on a batch of data.
Earlier:
from tensorflow.keras.models import Sequential
from tensorflow.python.keras.models import Sequential
Corrected:
from keras.models import Sequential
I was also facing the same error, so I removed model.summary() and the issue was resolved, since the error arises if model.summary() is called before the model is built.
Here is the link to the description, which states:
Raises:
ValueError: if `summary()` is called before the model is built.

How to pass the input tensor of a model to a loss function?

My goal is to create a custom loss function that calculates the loss based on y_true, y_pred and the tensor of the model's input layer:
import numpy as np
from tensorflow import keras as K
input_shape = (16, 16, 1)
input = K.layers.Input(input_shape)
dense = K.layers.Dense(16)(input)
output = K.layers.Dense(1)(dense)
model = K.Model(inputs=input, outputs=output)
def CustomLoss(y_true, y_pred):
    return K.backend.sum(K.backend.abs(y_true - model.input * y_pred))
model.compile(loss=CustomLoss)
model.fit(np.ones(input_shape), np.zeros(input_shape))
However, this code fails with the following error message:
TypeError: Cannot convert a symbolic Keras input/output to a numpy array. This error may indicate that you're trying to pass a symbolic value to a NumPy call, which is not supported. Or, you may be trying to pass Keras symbolic inputs/outputs to a TF API that does not register dispatching, preventing Keras from automatically converting the API call to a lambda layer in the Functional Model.
How can I pass the input tensor of my model to the loss function?
Tensorflow Version: 2.4.1
Python Version: 3.8.8
You can use add_loss to pass external layers to your loss. Here is an example:
import numpy as np
from tensorflow import keras as K
def CustomLoss(y_true, y_pred, input_l):
    return K.backend.sum(K.backend.abs(y_true - input_l * y_pred))
input_shape = (16, 16, 1)
n_sample = 10
X = np.random.uniform(0,1, (n_sample,) + input_shape)
y = np.random.uniform(0,1, (n_sample,) + input_shape)
inp = K.layers.Input(input_shape)
dense = K.layers.Dense(16)(inp)
out = K.layers.Dense(1)(dense)
target = K.layers.Input(input_shape)
model = K.Model(inputs=[inp,target], outputs=out)
model.add_loss( CustomLoss( target, out, inp ) )
model.compile(loss=None, optimizer='adam')
model.fit(x=[X,y], y=None, epochs=3)
To use the model in inference mode (removing the target from inputs)
final_model = K.Model(model.input[0], model.output)
final_model.predict(X)

ValueError: `decode_predictions` expects a batch of predictions (i.e. a 2D array of shape (samples, 1000)). Found array with shape: (1, 7)

I am using VGG16 with Keras for transfer learning (I have 7 classes in my new model), and as such I want to use the built-in decode_predictions method to output the predictions of my model. However, using the following code:
preds = model.predict(img)
decode_predictions(preds, top=3)[0]
I receive the following error message:
ValueError: decode_predictions expects a batch of predictions (i.e. a 2D array of shape (samples, 1000)). Found array with shape: (1, 7)
Now I wonder why it expects 1000 when I only have 7 classes in my retrained model.
A similar question I found here on Stack Overflow (Keras: ValueError: decode_predictions expects a batch of predictions) suggests including include_top=True in the model definition to solve this problem:
model = VGG16(weights='imagenet', include_top=True)
I have tried this, however it is still not working - giving me the same error as before. Any hint or suggestion on how to solve this issue is highly appreciated.
I suspect you are using some pre-trained model, let's say for instance ResNet50, and you are importing decode_predictions like this:
from keras.applications.resnet50 import decode_predictions
decode_predictions transforms an array of (num_samples, 1000) probabilities into the class names of the original ImageNet classes.
If you want to do transfer learning and classify between 7 different classes, you need to do it like this:
base_model = ResNet50(weights='imagenet', include_top=False)
# add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
# add a fully-connected layer
x = Dense(1024, activation='relu')(x)
# and a logistic layer -- let's say we have 7 classes
predictions = Dense(7, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=predictions)
...
After fitting the model and computing predictions, you have to manually map the output indices to your own class names, without using the imported decode_predictions.
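For example, a minimal sketch of that manual mapping (class_names is a hypothetical placeholder for your own 7 labels; argsort is just one way to pick the top 3):
import numpy as np

class_names = ["class_0", "class_1", "class_2", "class_3",
               "class_4", "class_5", "class_6"]  # hypothetical labels, replace with yours

preds = model.predict(img)               # shape (1, 7)
top3 = np.argsort(preds[0])[-3:][::-1]   # indices of the 3 highest probabilities
for i in top3:
    print(class_names[i], float(preds[0][i]))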
Overloading of the decode_predictions function: comment out the 1000-class constraint of the original function:
CLASS_INDEX = None

#keras_modules_injection
def test_my_decode_predictions(*args, **kwargs):
    return my_decode_predictions(*args, **kwargs)


def my_decode_predictions(preds, top=5, **kwargs):
    global CLASS_INDEX

    backend, _, _, keras_utils = get_submodules_from_kwargs(kwargs)

    # if len(preds.shape) != 2 or preds.shape[1] != 1000:
    #     raise ValueError('`decode_predictions` expects '
    #                      'a batch of predictions '
    #                      '(i.e. a 2D array of shape (samples, 1000)). '
    #                      'Found array with shape: ' + str(preds.shape))
    if CLASS_INDEX is None:
        fpath = keras_utils.get_file(
            'imagenet_class_index.json',
            CLASS_INDEX_PATH,
            cache_subdir='models',
            file_hash='c2c37ea517e94d9795004a39431a14cb')
        with open(fpath) as f:
            CLASS_INDEX = json.load(f)
    results = []
    for pred in preds:
        top_indices = pred.argsort()[-top:][::-1]
        result = [tuple(CLASS_INDEX[str(i)]) + (pred[i],) for i in top_indices]
        result.sort(key=lambda x: x[2], reverse=True)
        results.append(result)
    return results


print('Predicted: ', test_my_decode_predictions(pred, top=10))

Custom weighted loss function in Keras for weighing each element

I'm trying to create a simple weighted loss function.
Say, I have input dimensions 100 * 5, and output dimensions also 100 * 5. I also have a weight matrix of the same dimension.
Something like the following:
import numpy as np
train_X = np.random.randn(100, 5)
train_Y = np.random.randn(100, 5)*0.01 + train_X
weights = np.random.randn(*train_X.shape)
Defining the custom loss function
def custom_loss_1(y_true, y_pred):
    return K.mean(K.abs(y_true-y_pred)*weights)
Defining the model
from keras.layers import Dense, Input
from keras import Model
import keras.backend as K
input_layer = Input(shape=(5,))
out = Dense(5)(input_layer)
model = Model(input_layer, out)
Testing with existing metrics works fine
model.compile('adam','mean_absolute_error')
model.fit(train_X, train_Y, epochs=1)
Testing with our custom loss function doesn't work
model.compile('adam',custom_loss_1)
model.fit(train_X, train_Y, epochs=10)
It gives the following stack trace:
InvalidArgumentError (see above for traceback): Incompatible shapes: [32,5] vs. [100,5]
[[Node: loss_9/dense_8_loss/mul = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](loss_9/dense_8_loss/Abs, loss_9/dense_8_loss/mul/y)]]
Where is the number 32 coming from?
Testing a loss function with weights as Keras tensors
def custom_loss_2(y_true, y_pred):
    return K.mean(K.abs(y_true-y_pred)*K.ones_like(y_true))
This function seems to do the job, which suggests that a Keras tensor would work as a weight matrix. So, I created another version of the loss function.
Loss function try 3
from functools import partial
def custom_loss_3(y_true, y_pred, weights):
    return K.mean(K.abs(y_true-y_pred)*K.variable(weights, dtype=y_true.dtype))
cl3 = partial(custom_loss_3, weights=weights)
Fitting data using cl3 gives the same error as above.
InvalidArgumentError (see above for traceback): Incompatible shapes: [32,5] vs. [100,5]
[[Node: loss_11/dense_8_loss/mul = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](loss_11/dense_8_loss/Abs, loss_11/dense_8_loss/Variable/read)]]
I wonder what I'm missing! I could have used the notion of sample_weight in Keras; but then I'd have to reshape my inputs to a 3d vector.
I thought that this custom loss function should really have been trivial.
In model.fit the batch size is 32 by default; that's where this number is coming from. Here's what's happening (a small NumPy check after these three points makes the mismatch concrete):
In custom_loss_1 the tensor K.abs(y_true-y_pred) has shape (batch_size=32, 5), while the numpy array weights has shape (100, 5). This is an invalid multiplication, since the dimensions don't agree and broadcasting can't be applied.
In custom_loss_2 this problem doesn't exist because you're multiplying 2 tensors with the same shape (batch_size=32, 5).
In custom_loss_3 the problem is the same as in custom_loss_1, because converting weights into a Keras variable doesn't change their shape.
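Here is that small NumPy check (illustrative only; the shapes stand in for one batch versus the full weights array):
import numpy as np

a = np.ones((32, 5))    # stands in for K.abs(y_true - y_pred) of one batch
w = np.ones((100, 5))   # the full-dataset weights array
try:
    a * w
except ValueError as e:
    print(e)  # operands could not be broadcast together with shapes (32,5) (100,5)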
UPDATE: It seems you want to give a different weight to each element in each training sample, so the weights array should have shape (100, 5) indeed.
In this case, I would input your weights' array into your model and then use this tensor within the loss function:
import numpy as np
from keras.layers import Dense, Input
from keras import Model
import keras.backend as K
from functools import partial
def custom_loss_4(y_true, y_pred, weights):
    return K.mean(K.abs(y_true - y_pred) * weights)
train_X = np.random.randn(100, 5)
train_Y = np.random.randn(100, 5) * 0.01 + train_X
weights = np.random.randn(*train_X.shape)
input_layer = Input(shape=(5,))
weights_tensor = Input(shape=(5,))
out = Dense(5)(input_layer)
cl4 = partial(custom_loss_4, weights=weights_tensor)
model = Model([input_layer, weights_tensor], out)
model.compile('adam', cl4)
model.fit(x=[train_X, weights], y=train_Y, epochs=10)
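If you later need plain predictions from this setup, one option (my sketch, not part of the answer above) is to build an inference model from the data input alone, since the weights input only feeds the loss:
# `out` depends only on `input_layer`, so the fitted Dense layer is reused here.
pred_model = Model(input_layer, out)
preds = pred_model.predict(train_X)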
