How to train a regression model with multiple 3D arrays? - python

I want to train my regression model with 3D arrays. How can I do this in Python? Can you please guide me? Essentially, I want to predict a regression value from an input of multiple 3D arrays. Is it possible to predict just a single float number from multiple 3D arrays? Thanks.
train.model((x1, x2, x3, ..., xN), y-value)
where x1, x2, ..., xN are 3D arrays and y is the output, a single float number.

The key point is to reshape your 3D samples into flat 1D samples. The following example code uses tf.reshape to flatten the input before feeding it to a regular dense network, which regresses to a single output value through tf.identity (i.e. no activation).
%tensorflow_version 2.x
%reset -f
import tensorflow as tf
from tensorflow.keras import *
from tensorflow.keras.models import *
from tensorflow.keras.layers import *
from tensorflow.keras.callbacks import *

class regression_model(Model):
    def __init__(self):
        super(regression_model, self).__init__()
        self.dense1 = Dense(units=300, activation=tf.keras.activations.relu)
        self.dense2 = Dense(units=200, activation=tf.keras.activations.relu)
        self.dense3 = Dense(units=1, activation=tf.identity)

    @tf.function
    def call(self, x):
        h1 = self.dense1(x)
        h2 = self.dense2(h1)
        u = self.dense3(h2)  # Output
        return u

if __name__ == "__main__":
    inp = [[[1], [2], [3], [4]], [[3], [3], [3], [3]]]  # 2 samples of whatever shape
    exp = [[10], [12]]  # Regress to sums, for example
    inp = tf.constant(inp, dtype=tf.float32)
    exp = tf.constant(exp, dtype=tf.float32)

    NUM_SAMPLES = 2
    NUM_VALUES_IN_1SAMPLE = 4
    inp = tf.reshape(inp, (NUM_SAMPLES, NUM_VALUES_IN_1SAMPLE))

    model = regression_model()
    model.compile(loss=tf.losses.MeanSquaredError(),
                  optimizer=tf.optimizers.Adam(1e-3))
    model.fit(x=inp, y=exp, batch_size=len(inp), epochs=100)

    print(f"\nPrediction from {inp}, will be:")
    print(model.predict(x=inp, batch_size=len(inp), steps=1))
# EOF
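If you would rather not reshape the data by hand, a Flatten layer inside the model does the same flattening for you. A minimal sketch along those lines (the (4, 2, 3) sample shape and the random data are just illustrative assumptions, not from the question):
import numpy as np
from tensorflow.keras import layers, models

# Hypothetical data: 100 samples, each a 3D array of shape (4, 2, 3), one float target per sample
x = np.random.rand(100, 4, 2, 3).astype("float32")
y = np.random.rand(100, 1).astype("float32")

model = models.Sequential([
    layers.Input(shape=(4, 2, 3)),
    layers.Flatten(),                      # flattens each 3D sample into a 1D vector of 24 values
    layers.Dense(300, activation="relu"),
    layers.Dense(200, activation="relu"),
    layers.Dense(1),                       # linear output: a single float per sample
])
model.compile(loss="mse", optimizer="adam")
model.fit(x, y, epochs=10, batch_size=32)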

Related

In model.fit in tf.keras, is there a way to pass each sample in a batch n times?

I am trying to write a custom loss function for a model that utilizes Monte Carlo (MC) dropout. I want the model to run through each sample in a batch n times before feeding the predictions to the loss function. A toy version of the current code is shown below. The model has 24 inputs and 10 outputs with 5000 training samples.
import numpy as np
import tensorflow as tf

X = np.random.rand(5000, 24)
y = np.random.rand(5000, 10)

def MC_Loss(y_true, y_pred):
    mu = tf.math.reduce_mean(y_pred, axis=0)
    #error = tf.square(y_true - mu)
    error = tf.square(y_true - y_pred)
    var = tf.math.reduce_variance(y_pred, axis=0)
    return tf.math.divide(error, var)/2 + tf.math.log(var)/2 + tf.math.log(2*np.pi)/2

input_layer = tf.keras.layers.Input(shape=(X.shape[1],))
hidden_layer = tf.keras.layers.Dense(units=100, activation='elu')(input_layer)
do_layer = tf.keras.layers.Dropout(rate=0.20)(hidden_layer, training=True)
output_layer = tf.keras.layers.Dense(units=10, activation='sigmoid')(do_layer)
model = tf.keras.models.Model(input_layer, output_layer)
model.compile(loss=MC_Loss, optimizer='Adam')
model.fit(X, y, epochs=100, batch_size=128, shuffle=True)
The current shape of y_true and y_pred is (None, 10), with None being the batch_size. I want to have n values for each sample in the batch, so that I can get the mean and standard deviation for each sample to use in the loss function. I want these values because the mean and standard deviation should be unique to each sample, not taken across all samples in a batch. The current shape of mu and sigma is (10,), and I would want them to be (None, 10), which would mean y_true and y_pred have the shape (None, n, 10).
How can I accomplish this?
I believe I found the solution after some experimentation. The modified code is shown below.
import numpy as np
import tensorflow as tf

n = 100
X = np.random.rand(5000, 24)
X1 = np.concatenate(([X.reshape(X.shape[0], 1, X.shape[1]) for _ in range(n)]), axis=1)
y = np.random.rand(5000, 10)
y1 = np.concatenate(([y.reshape(y.shape[0], 1, y.shape[1]) for _ in range(n)]), axis=1)

def MC_Loss(y_true, y_pred):
    mu = tf.math.reduce_mean(y_pred, axis=1)
    obs = tf.math.reduce_mean(y_true, axis=1)
    error = tf.square(obs - mu)
    var = tf.math.reduce_variance(y_pred, axis=1)
    return tf.math.divide(error, var)/2 + tf.math.log(var)/2 + tf.math.log(2*np.pi)/2

input_layer = tf.keras.layers.Input(shape=(X.shape[1],))
hidden_layer = tf.keras.layers.Dense(units=100, activation='elu')(input_layer)
do_layer = tf.keras.layers.Dropout(rate=0.20)(hidden_layer, training=True)
output_layer = tf.keras.layers.Dense(units=10, activation='sigmoid')(do_layer)
model = tf.keras.models.Model(input_layer, output_layer)
model.compile(loss=MC_Loss, optimizer='Adam')
model.fit(X1, y1, epochs=100, batch_size=128, shuffle=True)
So what I am now doing is stacking the inputs and outputs along an intermediate axis, creating n identical copies of every input and output sample. TensorFlow shows a warning because the model was built without knowledge of this intermediate axis, but it still trains with no issues and the shapes are as expected.
Note: since y_true now has the shape (None, n, 10), you have to take the mean along the intermediate axis, which gives you the true value since all n copies are identical.
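As a side note, the same stacking can be written more compactly with np.repeat. This is only an equivalent way to build X1 and y1 under the same shapes as above, not part of the original answer:
import numpy as np

n = 100
X = np.random.rand(5000, 24)
y = np.random.rand(5000, 10)

# Insert an intermediate axis and repeat each sample n times along it
X1 = np.repeat(X[:, np.newaxis, :], n, axis=1)   # shape (5000, n, 24)
y1 = np.repeat(y[:, np.newaxis, :], n, axis=1)   # shape (5000, n, 10)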

Incompatible shape sizes using PyGAD

I'm trying to follow the tutorial given here.
This tutorial trains a Keras model using a genetic algorithm, with the PyGAD package. I'm interested in the binary classification case. My input matrix is of dimension 10000x20. Hence, I've created the following model using Keras:
input_layer = tensorflow.keras.layers.Input(20)
dense_layer1 = tensorflow.keras.layers.Dense(500, activation="relu")(input_layer)
dense_layer2 = tensorflow.keras.layers.Dense(500, activation="relu")(dense_layer1)
output_layer = tensorflow.keras.layers.Dense(1, activation="softmax")(dense_layer2)
model = tensorflow.keras.Model(inputs=input_layer, outputs=output_layer)
keras_ga = pygad.kerasga.KerasGA(model=model,
                                 num_solutions=10)
However, when I go to run the algorithm, using ga_instance.run(), I get the error:
ValueError: Shapes (10000,) and (10000, 1) are incompatible
I can't figure out why I'm getting this error. I want my Keras model to have 2 hidden layers, each with 500 hidden nodes, and 1 output node.
I think the problem is related to how each output is represented in the array. If there is a single output for each instance, then this is an example of preparing output data in a way that works with PyGAD; its shape is (1000, 1):
numpy.random.uniform(0, 1, (1000, 1))
Here is code that works, but with a simpler network architecture, because with the fitness function you used the fitness sometimes comes out as NaN.
As I do not have the same data you used, I generated the input/output data randomly.
import tensorflow.keras
import pygad.kerasga
import numpy
import pygad

def fitness_func(solution, sol_idx):
    global data_inputs, data_outputs, keras_ga, model

    model_weights_matrix = pygad.kerasga.model_weights_as_matrix(model=model,
                                                                 weights_vector=solution)
    model.set_weights(weights=model_weights_matrix)

    predictions = model.predict(data_inputs)
    cce = tensorflow.keras.losses.CategoricalCrossentropy()
    solution_fitness = 1.0 / (cce(data_outputs, predictions).numpy() + 0.00000001)
    # print("solution_fitness", cce(data_outputs, predictions).numpy(), solution_fitness)

    return solution_fitness

def callback_generation(ga_instance):
    print("Generation = {generation}".format(generation=ga_instance.generations_completed))
    print("Fitness = {fitness}".format(fitness=ga_instance.best_solution(ga_instance.last_generation_fitness)[1]))

data_inputs = numpy.random.uniform(0, 1, (1000, 20))
data_outputs = numpy.random.uniform(0, 1, (1000, 1))

# Create the model
from tensorflow.keras.layers import Dense, Dropout

l1_rate = 1e-6
l2_rate = 1e-6

input_layer = tensorflow.keras.layers.InputLayer(20)
dense_layer1 = tensorflow.keras.layers.Dense(10, activation="relu",
                                             kernel_regularizer=tensorflow.keras.regularizers.l1_l2(l1=l1_rate, l2=l2_rate))
output_layer = tensorflow.keras.layers.Dense(1, activation="sigmoid")

model = tensorflow.keras.Sequential()
model.add(input_layer)
model.add(dense_layer1)
model.add(Dropout(0.2))
model.add(output_layer)

keras_ga = pygad.kerasga.KerasGA(model=model,
                                 num_solutions=10)

# Run PyGAD
num_generations = 30
num_parents_mating = 5
initial_population = keras_ga.population_weights

ga_instance = pygad.GA(num_generations=num_generations,
                       num_parents_mating=num_parents_mating,
                       initial_population=initial_population,
                       fitness_func=fitness_func,
                       on_generation=callback_generation)

ga_instance.run()
Thanks for using PyGAD!
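For the binary classification case in the question, the labels also need to match the single-unit output in both shape and activation. A small sketch of one way to line them up (assuming the labels are a flat array of 0/1 values; the reshape and the switch to a sigmoid output with BinaryCrossentropy in the fitness function are suggestions of mine, not part of the answer above):
import numpy
import tensorflow

# Hypothetical labels: one 0/1 value per sample, shape (10000,)
labels = numpy.random.randint(0, 2, size=(10000,))

# Reshape to (10000, 1) so the labels match the single sigmoid output unit
data_outputs = labels.reshape(-1, 1).astype("float32")

# With a single output unit, binary cross-entropy is the matching loss for the fitness function
bce = tensorflow.keras.losses.BinaryCrossentropy()
# solution_fitness = 1.0 / (bce(data_outputs, predictions).numpy() + 1e-8)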

How to cache layer activations in Keras?

I am training a NN in Keras in which the first layers have fixed (non-trainable) weights.
The computation performed by these layers is quite expensive during training. It would make sense to cache the layer activations for each input and reuse them when the same input data is passed on the next epoch, to save computation time.
Is it possible to achieve this behaviour in Keras?
You could separate your model into two different models. For example, in the following snippet x_ would correspond to your intermediate activations:
from keras.models import Model
from keras.layers import Input, Dense
import numpy as np
nb_samples = 100
in_dim = 2
h_dim = 3
out_dim = 1
a = Input(shape=(in_dim,))
b = Dense(h_dim, trainable=False)(a)
model1 = Model(a, b)
model1.compile('sgd', 'mse')
c = Input(shape=(h_dim,))
d = Dense(out_dim)(c)
model2 = Model(c, d)
model2.compile('sgd', 'mse')
x = np.random.rand(nb_samples, in_dim)
y = np.random.rand(nb_samples, out_dim)
x_ = model1.predict(x) # Shape=(nb_samples, h_dim)
model2.fit(x_, y)
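At inference time the two models can be chained back together. A small sketch of two ways to do this, reusing the names from the snippet above (this combination step is my addition, not from the answer):
# Option 1: run the frozen part and the trainable part back to back
new_x = np.random.rand(10, in_dim)
preds = model2.predict(model1.predict(new_x))

# Option 2: wire both models into a single end-to-end model
full_model = Model(a, model2(model1(a)))
preds = full_model.predict(new_x)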

Keras lambda layer wrong output size

I have 2 inputs into a Lambda layer, one of size (2,3,) and the other of size (3,). The Lambda layer should return an output of size 2; however, when the multiply layer is executed the following error occurs:
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimensions must be equal, but are 2 and 3 for 'multiply_1/mul' (op: 'Mul') with input shapes: [?,2], [?,3].
The relevant code is below and any help would be much appreciated, thanks:
import numpy as np
from keras.models import Model
from keras import backend as K
from keras.engine.topology import Layer
from keras.layers import Dense, Input, concatenate, Lambda, multiply, add
import tensorflow as tf
import time

def weights_Fx(x):
    j = x[0][:, 0]
    k = x[1][0]
    y = j - k
    return y

def sum_layer(x):
    x = tf.reduce_sum(x)
    return x

type1_2 = Dense(units=1, activation='relu', name="one")
type1_3 = Dense(units=1, activation='relu', name="two")

in1 = Input(shape=(1,))
in2 = Input(shape=(1,))

n1 = type1_2(in1)
n2 = type1_3(in2)

model = concatenate([n1, n2], axis=-1, name='merge_predicitions')

coords_in = Input(shape=(2, 3,))
coords_target = Input(shape=(3,))

model2 = Lambda(weights_Fx, output_shape=(2,), name='weightsFx')([coords_in, coords_target])

model = multiply([model, model2])
model = Lambda(sum_layer)(model)

model = Model(inputs=[in1, in2, coords_in, coords_target], outputs=[model])
The issue was with how I was indexing the array. It is important to remember that although the data is of shape (2,3), Keras will create a tensor of shape (None,2,3); therefore, to perform the operation as desired, the following is needed:
y = x[0][:,:,0] - x[1][:,0]
Furthermore, in sum_layer, in order to prevent the rank (number of dimensions) of the tensor being reduced by 1, the following is required:
y = K.sum(x, axis=1, keepdims=True)
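Putting those two fixes together, the corrected Lambda functions might look like the sketch below (my consolidation of the fixes above; I keep a size-1 axis on the target term so the broadcast against the (None, 2) tensor is explicit):
from keras import backend as K

def weights_Fx(x):
    # x[0] has shape (None, 2, 3), x[1] has shape (None, 3)
    # keep the batch dimension and subtract the target's first coordinate
    return x[0][:, :, 0] - x[1][:, 0:1]      # shape (None, 2)

def sum_layer(x):
    # sum over the feature axis but keep the rank, giving shape (None, 1)
    return K.sum(x, axis=1, keepdims=True)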
Your Lambda layer is not returning an output of shape 2; it is returning an output of shape 3.
model2's shape is (None, 3) and not (None, 2), which is what causes the error in the multiply of model and model2.
Take a look at your coords_in and coords_target shapes.

Custom weighted loss function in Keras for weighing each element

I'm trying to create a simple weighted loss function.
Say, I have input dimensions 100 * 5, and output dimensions also 100 * 5. I also have a weight matrix of the same dimension.
Something like the following:
import numpy as np
train_X = np.random.randn(100, 5)
train_Y = np.random.randn(100, 5)*0.01 + train_X
weights = np.random.randn(*train_X.shape)
Defining the custom loss function
def custom_loss_1(y_true, y_pred):
    return K.mean(K.abs(y_true - y_pred) * weights)
Defining the model
from keras.layers import Dense, Input
from keras import Model
import keras.backend as K
input_layer = Input(shape=(5,))
out = Dense(5)(input_layer)
model = Model(input_layer, out)
Testing with existing metrics works fine
model.compile('adam','mean_absolute_error')
model.fit(train_X, train_Y, epochs=1)
Testing with our custom loss function doesn't work
model.compile('adam',custom_loss_1)
model.fit(train_X, train_Y, epochs=10)
It gives the following stack trace:
InvalidArgumentError (see above for traceback): Incompatible shapes: [32,5] vs. [100,5]
[[Node: loss_9/dense_8_loss/mul = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](loss_9/dense_8_loss/Abs, loss_9/dense_8_loss/mul/y)]]
Where is the number 32 coming from?
Testing a loss function with weights as Keras tensors
def custom_loss_2(y_true, y_pred):
    return K.mean(K.abs(y_true - y_pred) * K.ones_like(y_true))
This function seems to do the job, which probably suggests that a Keras tensor as the weight matrix would work. So, I created another version of the loss function.
Loss function try 3
from functools import partial

def custom_loss_3(y_true, y_pred, weights):
    return K.mean(K.abs(y_true - y_pred) * K.variable(weights, dtype=y_true.dtype))

cl3 = partial(custom_loss_3, weights=weights)
Fitting data using cl3 gives the same error as above.
InvalidArgumentError (see above for traceback): Incompatible shapes: [32,5] vs. [100,5]
[[Node: loss_11/dense_8_loss/mul = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](loss_11/dense_8_loss/Abs, loss_11/dense_8_loss/Variable/read)]]
I wonder what I'm missing! I could have used the notion of sample_weight in Keras; but then I'd have to reshape my inputs to a 3d vector.
I thought that this custom loss function should really have been trivial.
In model.fit the batch size is 32 by default, that's where this number is coming from. Here's what's happening:
In custom_loss_1 the tensor K.abs(y_true-y_pred) has shape (batch_size=32, 5), while the numpy array weights has shape (100, 5). This is an invalid multiplication, since the dimensions don't agree and broadcasting can't be applied.
In custom_loss_2 this problem doesn't exist because you're multiplying 2 tensors with the same shape (batch_size=32, 5).
In custom_loss_3 the problem is the same as in custom_loss_1, because converting weights into a Keras variable doesn't change their shape.
UPDATE: It seems you want to give a different weight to each element in each training sample, so the weights array should have shape (100, 5) indeed.
In this case, I would input your weights' array into your model and then use this tensor within the loss function:
import numpy as np
from keras.layers import Dense, Input
from keras import Model
import keras.backend as K
from functools import partial

def custom_loss_4(y_true, y_pred, weights):
    return K.mean(K.abs(y_true - y_pred) * weights)

train_X = np.random.randn(100, 5)
train_Y = np.random.randn(100, 5) * 0.01 + train_X
weights = np.random.randn(*train_X.shape)

input_layer = Input(shape=(5,))
weights_tensor = Input(shape=(5,))
out = Dense(5)(input_layer)

cl4 = partial(custom_loss_4, weights=weights_tensor)

model = Model([input_layer, weights_tensor], out)
model.compile('adam', cl4)
model.fit(x=[train_X, weights], y=train_Y, epochs=10)
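One practical consequence of this approach is that the trained model expects two inputs, so at prediction time something still has to be fed in place of the weights even though they only affect the loss. A small sketch of two ways to handle this, reusing the names from the snippet above (test_X is a hypothetical test array, my addition):
# Dummy weights are fine at prediction time; they are only consumed by the loss
test_X = np.random.randn(10, 5)
preds = model.predict([test_X, np.ones_like(test_X)])

# Alternatively, build a prediction-only model that reuses the trained Dense layer
pred_model = Model(input_layer, out)
preds = pred_model.predict(test_X)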
