I have a neural network with the structure shown below.
I need it to predict ~0 as the input goes to ∞.
To implement this, I decided to add the model's prediction at a large input to the loss function (if there is another way, I'd be happy to hear it).
But I don't see a way to compute predictions inside my loss function.
Network code:
from keras.models import Model
from keras.layers import Input, Dense, Add
input_array = []
output_array = []
for i in range(14):
    input_layer = Input(shape=(1,))
    hidden1 = Dense(64, activation='relu')(input_layer)
    hidden2 = Dense(64, activation='relu')(hidden1)
    output_layer = Dense(1, activation='linear')(hidden2)
    input_array.append(input_layer)
    output_array.append(output_layer)
# merge input models
summation = Add()(output_array)
model = Model(inputs=input_array, outputs=summation)
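Since the question invites alternatives: one way to fold the large-input constraint into training is a custom training step that adds a penalty on the model's prediction at a fixed large probe input. This is a minimal sketch, not the original poster's code; x_large, penalty_weight, and the probe value 1e3 are assumptions to tune.

import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
mse = tf.keras.losses.MeanSquaredError()

# One probe point far outside the training range, one per scalar input.
x_large = [tf.fill((1, 1), 1e3) for _ in range(14)]
penalty_weight = 0.1  # assumed hyperparameter; tune on validation data

@tf.function
def train_step(x_batch, y_batch):
    with tf.GradientTape() as tape:
        data_loss = mse(y_batch, model(x_batch, training=True))
        # Push the prediction at the large probe input towards 0.
        penalty = tf.reduce_mean(tf.square(model(x_large, training=True)))
        loss = data_loss + penalty_weight * penalty
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss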
I was wondering if it is possible, using TensorFlow, to create a customized network structure where the input layer has an extra connection to a hidden layer that is not adjacent to it. As an example, suppose I have the simple network structure shown below.
import numpy as np
import random
import tensorflow as tf
from tensorflow import keras
m = 200
n = 5
my_input = np.random.random([m, n])
my_output = np.random.random([m, 1])

my_model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(my_input.shape[1],)),
    tf.keras.layers.Dense(32, activation='softmax'),
    tf.keras.layers.Dense(32, activation='tanh'),
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dense(1)
])
my_model.compile(loss='mse', optimizer=tf.keras.optimizers.Adam(learning_rate=0.001))
res = my_model.fit(my_input, my_output, epochs=50, batch_size=1, verbose=0)
Is there a way for the first layer, which holds the input values, to have an extra connection to the third layer, the one with the ReLU activation? While doing so, I'd like to apply different constraints to each connection. For example, for the connection coming from the previous layer, I'd like to use GlorotNormal as the weight initialization; for the extra connection coming from the input layer, I'd like to use HeUniform initialization.
I tried to visualize what I have in mind below.
Use the Keras functional API and tf.concat:
import numpy as np
import random
import tensorflow as tf
from tensorflow import keras
m = 200
n = 5
my_input = np.random.random([m, n])
my_output = np.random.random([m, 1])

inputs = tf.keras.layers.Input((my_input.shape[1],))
x = tf.keras.layers.Flatten()(inputs)
x = tf.keras.layers.Dense(32, activation='softmax')(x)
x = tf.keras.layers.Dense(32, activation='tanh',
                          kernel_initializer=tf.keras.initializers.GlorotNormal())(x)
# The extra connection straight from the input, with its own initializer.
y = tf.keras.layers.Dense(my_input.shape[1],
                          kernel_initializer=tf.keras.initializers.HeUniform())(inputs)
x = tf.keras.layers.Dense(32, activation='relu')(tf.concat([x, y], axis=1))
outputs = tf.keras.layers.Dense(1)(x)

my_model = tf.keras.Model(inputs, outputs)
dot_img_file = 'model_1.png'
tf.keras.utils.plot_model(my_model, to_file=dot_img_file, show_shapes=True)
my_model.compile(loss='mse', optimizer=tf.keras.optimizers.Adam(learning_rate=0.001))
res = my_model.fit(my_input, my_output, epochs=50, batch_size=1, verbose=0)
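A variant worth noting (a suggestion beyond the original answer): inside a functional model the merge is often written with the Concatenate layer instead of raw tf.concat, which keeps the merge visible as a layer in model.summary() and plot_model():

# Equivalent merge using the Concatenate layer instead of tf.concat.
merged = tf.keras.layers.Concatenate(axis=1)([x, y])
x = tf.keras.layers.Dense(32, activation='relu')(merged)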
I'm creating an Ensemble of Vgg19, DenseNet, and EfficientNetB1.
The code is as follows:
IMAGE_SIZE = (224,224,3)
import tensorflow as tf
vgg19 = tf.keras.applications.vgg19.VGG19(
    input_shape=IMAGE_SIZE, weights='imagenet', include_top=False)
for layer in vgg19.layers:
    layer._name = layer._name + str('_19')
    layer.trainable = False

effnetb1 = tf.keras.applications.efficientnet.EfficientNetB1(
    include_top=False, weights='imagenet', input_shape=IMAGE_SIZE)
for layer in effnetb1.layers:
    layer._name = layer._name + str('_B1')
    layer.trainable = False

densenet = tf.keras.applications.densenet.DenseNet121(
    include_top=False, weights="imagenet", input_shape=IMAGE_SIZE)
for layer in densenet.layers:
    layer._name = layer._name + str('_Dense')
    layer.trainable = False
from keras.layers import Input, Flatten, Concatenate, Dense, Average, Dropout
inp = Input(IMAGE_SIZE)
vgg19_x = Flatten()(vgg19(inp))
vgg19_x = Dense(256, activation='relu')(vgg19_x)
effnet_x = Flatten()(effnetb1(inp))
effnet_x = Dense(256, activation='relu')(effnet_x)
densenet_x = Flatten()(densenet(inp))
densenet_x = Dense(256, activation='relu')(densenet_x)
from keras.models import Model
x = Concatenate()([vgg19_x, effnet_x, densenet_x])
x = Dense(128, activation='relu')(x)
x = Dropout(0.30)(x)
x = Dense(64, activation='relu')(x)
out = Dense(2, activation='softmax')(x)
model = Model(inputs = inp, outputs = out)
model.compile(
    loss='categorical_crossentropy',
    optimizer=tf.keras.optimizers.Adam(
        learning_rate=0.0005,
        name="Adam"),
    metrics=['accuracy']
)
model.summary()
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
checkpointer = ModelCheckpoint(filepath="/content/drive/MyDrive/ensemble/ensemble-weights.hdf5", verbose=1, save_best_only=True)
r = model.fit(
    training_set,
    validation_data=test_set,
    epochs=30,
    steps_per_epoch=len(training_set),
    validation_steps=len(test_set),
    callbacks=[checkpointer]
)
The code runs fine and training proceeds successfully when I'm not using the callback. But when I use a ModelCheckpoint, I get the following error after the 1st epoch:
ValueError: The target structure is of type `<class 'keras.engine.keras_tensor.KerasTensor'>`
KerasTensor(type_spec=TensorSpec(shape=(None, 224, 224, 3), dtype=tf.float32, name='input_5'), name=...
However, the input structure is a sequence (<class 'list'>) of length 0.
[]
nest cannot guarantee that it is safe to map one to the other.
Can anyone tell me what's wrong here? Also, is it because I'm concatenating three models?
Your help will be appreciated. Thank you!
I also ran into this issue while trying to implement a nested model (which is what you construct here once you create the concatenated model).
The issue seems to be that Keras cannot handle the inputs and outputs of nested models in newer TensorFlow versions (2.0 and above). Depending on the version you are on, you may need to refer explicitly to the input/output of the nested model you are using. In TF 2.6, what seems to work is to define separate models for each part, i.e. the common layers added after concatenation should also be wrapped in a model, like below (taken from here):
# Make a Grad-CAM heatmap following the Keras tutorial.
last_conv_layer = model.layers[-4].layers[-1]
last_conv_layer_model = keras.Model(model.layers[-4].inputs, last_conv_layer.output)

# Second, we create a model that maps the activations of the last conv
# layer to the final class predictions.
classifier_input = keras.Input(shape=last_conv_layer.output.shape[1:])
x = classifier_input
for layer in model.layers[-3:]:
    x = layer(x)
classifier_model = keras.Model(classifier_input, x)

# Prepare the image with the preprocessing layers.
preprocess_layers = keras.Model(model.inputs, model.layers[-5].output)
img_array = preprocess_layers(prepared_image)

# Then compute the gradient of the top predicted class for the input image
# with respect to the activations of the last conv layer.
with tf.GradientTape() as tape:
    # Compute activations of the last conv layer and make the tape watch them.
    last_conv_layer_output = last_conv_layer_model(img_array)
    tape.watch(last_conv_layer_output)
    # Compute class predictions.
    preds = classifier_model(last_conv_layer_output)
    top_pred_index = tf.argmax(preds[0])
    top_class_channel = preds[:, top_pred_index]

# This is the gradient of the top predicted class with regard to
# the output feature map of the last conv layer.
grads = tape.gradient(top_class_channel, last_conv_layer_output)
You can also check the following GitHub issues (not directly related, but they deal with a similar problem): issue1, issue2, issue3
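Separately, a workaround that often sidesteps this saving error (a suggestion beyond the original answer, worth verifying on your TF version): tell ModelCheckpoint to save only the weights, so Keras never tries to serialize the nested model graph to HDF5. You then rebuild the architecture in code and call model.load_weights() before inference.

from tensorflow.keras.callbacks import ModelCheckpoint

# Save only the weights; rebuild the architecture in code before loading them.
checkpointer = ModelCheckpoint(
    filepath="/content/drive/MyDrive/ensemble/ensemble-weights.hdf5",
    verbose=1,
    save_best_only=True,
    save_weights_only=True,
)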
ValueError: Can not squeeze dim[1], expected a dimension of 1, got 3 for 'metrics/sparse_categorical_accuracy/Squeeze' (op: 'Squeeze') with input shapes: [?,3].
The Iris dataset
In this assignment, you will use the Iris dataset. It consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor). Four features were measured from each sample: the length and the width of the sepals and petals, in centimeters. For a reference, see the following paper:
R. A. Fisher. "The use of multiple measurements in taxonomic problems". Annals of Eugenics, 7(2):179–188, 1936.
Your goal is to construct a neural network that classifies each sample into the correct class, as well as applying validation and regularisation techniques.
Load and preprocess the data
First read in the Iris dataset using datasets.load_iris(), and split the dataset into training and test sets.
You can now construct a model to fit to the data. Using the Sequential API, build your model according to the following specifications:
The model should use the input_shape in the function argument to set the input size in the first layer.
The first layer should be a dense layer with 64 units.
The weights of the first layer should be initialised with the He uniform initializer.
The biases of the first layer should be all initially equal to one.
There should then be a further four dense layers, each with 128 units.
These should be followed by four dense layers, each with 64 units.
All of these Dense layers should use the ReLU activation function.
The output Dense layer should have 3 units and the softmax activation function.
In total, the network should have 10 layers.
from numpy.random import seed
seed(8)
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets, model_selection
get_ipython().run_line_magic('matplotlib', 'inline')
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Softmax
def read_in_and_split_data(iris_data):
    return model_selection.train_test_split(
        iris_data["data"],
        iris_data["target"],
        test_size=0.1
    )
# Run your function to generate the test and training data.
iris_data = datasets.load_iris()
(train_data, test_data,
 train_targets, test_targets) = read_in_and_split_data(iris_data)

# We will now convert the training and test targets to a one-hot encoding.
train_targets = tf.keras.utils.to_categorical(np.array(train_targets))
test_targets = tf.keras.utils.to_categorical(np.array(test_targets))
#### GRADED CELL ####
# Complete the following function.
# Make sure to not change the function name or arguments.

def get_model(input_shape):
    """
    This function should build a Sequential model according to
    the above specification. Ensure the weights are initialised
    by providing the input_shape argument in the first layer, given by the
    function argument.
    Your function should return the model.
    """
    model = Sequential([
        Dense(64, activation="relu",
              kernel_initializer='he_uniform',
              bias_initializer='ones',
              input_shape=input_shape),
        Dense(128, activation="relu"),
        Dense(128, activation="relu"),
        Dense(128, activation="relu"),
        Dense(128, activation="relu"),
        Dense(64, activation="relu"),
        Dense(64, activation="relu"),
        Dense(64, activation="relu"),
        Dense(64, activation="relu"),
        Dense(3, activation="softmax"),
    ])
    return model
# Run your function to get the model
model = get_model(train_data[0].shape)

# #### Compile the model
#
# You should now compile the model using the `compile` method.
# Remember that you need to specify an optimizer, a loss function and
# a metric to judge the performance of your model.
#### GRADED CELL ####
# Complete the following function.
# Make sure to not change the function name or arguments.

def compile_model(model):
    #model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
    opt = tf.keras.optimizers.Adam(learning_rate=0.0001)
    acc = tf.keras.metrics.SparseCategoricalAccuracy()
    model.compile(optimizer=opt,
                  loss='sparse_categorical_crossentropy',
                  metrics=[acc])
# Run your function to compile the model
compile_model(model)
#### GRADED CELL ####
# Complete the following function.
# Make sure to not change the function name or arguments.

def train_model(model, train_data, train_targets, epochs):
    """
    This function should train the model for the given number of epochs on the
    train_data and train_targets.
    Your function should return the training history, as returned by model.fit.
    """
    return model.fit(train_data, train_targets, epochs=epochs)

# Run the following cell to run the training for 800 epochs.

# Run your function to train the model
history = train_model(model, train_data, train_targets, epochs=800)
This is because you have the wrong loss function. Your targets are one-hot encoded, so you should not use 'sparse_categorical_crossentropy'; you should use 'categorical_crossentropy' instead.
The same goes for acc = tf.keras.metrics.SparseCategoricalAccuracy(): it should be acc = tf.keras.metrics.CategoricalAccuracy().
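Putting both fixes together, a corrected compile_model would look like this (the sparse variants expect integer class labels, while the categorical variants expect one-hot vectors like those produced by to_categorical above):

def compile_model(model):
    opt = tf.keras.optimizers.Adam(learning_rate=0.0001)
    acc = tf.keras.metrics.CategoricalAccuracy()
    model.compile(optimizer=opt,
                  loss='categorical_crossentropy',
                  metrics=[acc])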
I have a multi-input Keras neural network where I want to calculate the gradient of the output with respect to one of the intermediate layers. The layer I chose is inside one of the two sequential models that each handle one of the model's two inputs.
But when I try to calculate the gradient with respect to this layer, I get [None] as the output. I am able to take the gradient of other layers, such as the concatenate layer, but not of any layers within my two branches. Is it possible to take the gradient of this layer, given that it sits inside a Sequential wrapper in my model?
Here is some code that shows what I'm trying to do.
from keras.models import Sequential, Model
from keras.layers import Dense, Input, concatenate
import keras.backend as K
# Init branch 1
branch1 = Sequential()
branch1.add(Dense(64, activation='relu', input_shape=(1000,)))
branch1.add(Dense(32, activation='relu'))
# Init branch 2
branch2 = Sequential()
branch2.add(Dense(16, activation='relu', input_shape=(500,)))
branch2.add(Dense(8, activation='relu'))
branch1_input = Input(shape=(1000,))
branch1_out = branch1(branch1_input)
branch2_input = Input(shape=(500,))
branch2_out = branch2(branch2_input)
# Combine
x = concatenate([branch1_out, branch2_out])
out = Dense(1)(x)
# Create model
model = Model(inputs=[branch1_input, branch2_input], outputs=out)
layer = model.get_layer('sequential_1').get_layer('dense_2')
grads = K.gradients(model.output, layer.output)
print(grads) # prints out [None]
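No answer is attached here, but a plausible explanation (an assumption, not from the thread): K.gradients builds symbolic gradients on the outer model's graph, and the tensors of layers inside a nested Sequential belong to the inner model's own graph, so the two are disconnected and the gradient comes back as [None]. In TF 2.x a common workaround is to rebuild the branches with the functional API so the intermediate tensor is reachable, expose it as an extra output, and differentiate with tf.GradientTape. A sketch under those assumptions:

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Input, concatenate
from tensorflow.keras.models import Model

# Build both branches with the functional API so every tensor stays reachable.
branch1_input = Input(shape=(1000,))
b1 = Dense(64, activation='relu')(branch1_input)
b1 = Dense(32, activation='relu')(b1)  # the intermediate layer of interest

branch2_input = Input(shape=(500,))
b2 = Dense(16, activation='relu')(branch2_input)
b2 = Dense(8, activation='relu')(b2)

x = concatenate([b1, b2])
out = Dense(1)(x)

# Expose the intermediate activation as a second output.
grad_model = Model([branch1_input, branch2_input], [out, b1])

x1 = np.random.random((4, 1000)).astype('float32')
x2 = np.random.random((4, 500)).astype('float32')
with tf.GradientTape() as tape:
    preds, hidden = grad_model([x1, x2])
grads = tape.gradient(preds, hidden)  # gradient of the output w.r.t. the hidden layer
print(grads.shape)  # (4, 32)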
I am trying to build a neural network that predicts 3 output values from 63 inputs. I have a dataset consisting of two numpy arrays with shapes [8100, 63] and [8100, 3], but when I feed them to Keras the model does not converge and the mean squared error is in the area of 10^11.
The function I used to generate the data does not have any non-linear properties, so I first thought one or two layers should be enough. With three layers the MSE is still in the area of 10^10, and I am not sure what I am doing wrong.
The regression should return three absolute values, which can be larger than 1; this is why I didn't use a softmax layer.
I would be really grateful for any input or help!
import numpy as np
from keras.models import *
from keras.layers import Dense
from keras import optimizers
from keras.utils import plot_model

np.random.seed(7)

# Define input
tf_features_64 = np.load("IN.npy")
tf_labels_64 = np.load("OUT.npy")
tf_features_32 = tf_features_64.astype(np.float32)
tf_labels_32 = tf_labels_64.astype(np.float32)
X = tf_features_32
Y = tf_labels_32

# Create layers
visible = Input(shape=(63,))
x = Dense(100, activation='relu')(visible)
x = Dense(100, activation='relu')(x)
x = Dense(100, activation='relu')(x)
x = Dense(70, activation='relu')(x)
x = Dense(30, activation='relu')(x)
output = Dense(3)(x)

Optimizer = optimizers.Adam(lr=0.001)
model = Model(inputs=visible, outputs=output)
model.compile(optimizer=Optimizer,
              loss='categorical_crossentropy',
              metrics=['mse'])
model.fit(X, Y, epochs=400, batch_size=300, shuffle=True)
model.summary()
When using a neural network for classification, you should use a softmax activation in the last layer together with the categorical_crossentropy loss:
output = Dense(3, activation='softmax')(x)
model.compile(optimizer=Optimizer,
              loss='categorical_crossentropy')
For regression, you should use a linear output with the mse loss:
output = Dense(3)(x)
model.compile(optimizer=Optimizer,
              loss='mse')
You are using categorical_crossentropy as the loss function and mse only as a metric:
model.compile(optimizer=Optimizer,
              loss='categorical_crossentropy',
              metrics=['mse'])
Change the loss function to mse:
model.compile(optimizer=Optimizer,
              loss='mse')
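Beyond the loss fix, one more hedged suggestion: an initial MSE around 10^11 suggests the raw targets are very large, and standardizing inputs and targets usually makes the optimization much better conditioned. A sketch using scikit-learn's StandardScaler (an addition, not part of the original answer):

from sklearn.preprocessing import StandardScaler

x_scaler = StandardScaler()
y_scaler = StandardScaler()
X_std = x_scaler.fit_transform(X)
Y_std = y_scaler.fit_transform(Y)

model.compile(optimizer=Optimizer, loss='mse')
model.fit(X_std, Y_std, epochs=400, batch_size=300, shuffle=True)

# Undo the target scaling at prediction time.
preds = y_scaler.inverse_transform(model.predict(X_std))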