Jacobian of tensorflow model - python

I am trying to calculate the Jacobian matrix of my neural network, which is trained for autoregression. There are 9 input variables to the model and it predicts 3 output variables.
Input shape = (1, 9)
Output shape = (1, 3)
With my current TensorFlow code I can calculate the full Jacobian matrix of shape (3, 9).
A representation of the matrix I can currently calculate is attached here:
Jacobian Matrix.
My issue is that this Jacobian calculation is very slow, and I don't want to calculate all the entries, only those at the places marked in the above image.
The code snippets relevant to this issue are below. The Jacobian is used for an extended Kalman filter.
Can someone help me figure out how I can do this in TensorFlow?
Code for jacobian calculation
def jacobian_tensorflow(self, verbose=False):
    jacobian_matrix = []
    it = tqdm(range(self.output_size)) if verbose else range(self.output_size)
    for o in it:
        grad_func = tf.gradients(self.nn_model.output[:, o], self.nn_model.input)
        gradients = sess.run(grad_func, feed_dict={self.nn_model.input: self.pred_x.reshape((1, self.pred_x.size))})
        jacobian_matrix.append(gradients[0][0, :])
    return np.array(jacobian_matrix)
Code of my neural network
input_window = Input(shape=(deg_order * 3,))
x = Dense(90, activation='tanh')(input_window)
x = Dense(60, activation='tanh')(x)
x = Dense(30, activation='tanh')(x)
x = Dense(15, activation='tanh')(x)
output = Dense(3, activation='tanh')(x)
autoencoder_model = Model(input_window, output)
autoencoder_model.compile(optimizer='adam', loss=tf.keras.metrics.mean_squared_error)
autoencoder_model.fit(x_train, y_train, epochs=epochs,
                      shuffle=True,
                      validation_data=(x_validate, y_validate))
To explain clearly, I have added an image of the Jacobian entries I want to calculate.
In the image, o/p means the output variable and i/p the input variable. The numbers give the positions of the variables on the input and output side, and the numbers in the layers are the neurons in that hidden layer.
Jacobian I want to calculate from the neural network
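Since the marked entries depend on the image, here is only a minimal sketch of the idea, in the same TF1/session style as the question: build the gradient ops once (rebuilding them on every call keeps adding nodes to the graph, which is a common cause of slowdown), run them only for the output rows that are needed, and slice out the input columns afterwards. model, sess, pred_x, needed_outputs and needed_inputs are stand-in names, not taken from the original code.

import numpy as np
import tensorflow as tf

needed_outputs = [0, 2]     # hypothetical: rows of the (3, 9) Jacobian actually needed
needed_inputs = [0, 1, 2]   # hypothetical: columns actually needed

# Build the gradient ops once, outside any loop that runs per filter step.
grad_ops = {o: tf.gradients(model.output[:, o], model.input)[0]
            for o in needed_outputs}

def partial_jacobian(pred_x):
    feed = {model.input: pred_x.reshape((1, pred_x.size))}
    rows = []
    for o in needed_outputs:
        grad = sess.run(grad_ops[o], feed_dict=feed)  # shape (1, 9)
        rows.append(grad[0, needed_inputs])           # keep only the wanted columns
    return np.array(rows)  # shape (len(needed_outputs), len(needed_inputs))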

Related

How to differentiate a gradient in Pytorch

I'm trying to differentiate a gradient in PyTorch. I found this link but can't get it to work.
My code looks as follows:
import torch
from torch.autograd import grad
import torch.nn as nn
import torch.optim as optim

class net_x(nn.Module):
    def __init__(self):
        super(net_x, self).__init__()
        self.fc1 = nn.Linear(2, 20)
        self.fc2 = nn.Linear(20, 20)
        self.out = nn.Linear(20, 4)

    def forward(self, x):
        x = self.fc1(x)
        x = self.fc2(x)
        x = self.out(x)
        return x
nx = net_x()
r = torch.tensor([1.0,2.0])
nx(r)
>>>tensor([-0.2356, -0.7315, -0.2100, -0.6741], grad_fn=<AddBackward0>)
But when I try to differentiate the function with respect to the first parameter
grad(nx, r[0])
I get the error
TypeError: 'net_x' object is not iterable
Update
Trying to extend this to tensors:
For some reason the gradient is the same for all inputs.
a = torch.rand((8, 2), requires_grad=True)
s = []
s_t = []
for input_tensor in a:
    output_tensor = nx(input_tensor)
    s.append(output_tensor[0])
    s_t_value = grad(output_tensor[0], input_tensor)[0][0]
    s_t.append(s_t_value)
print(s_t)
But the output is:
[tensor(-0.1326), tensor(-0.1326), tensor(-0.1326), tensor(-0.1326), tensor(-0.1326), tensor(-0.1326), tensor(-0.1326), tensor(-0.1326)]
The first thing to change, if you want the gradients with respect to r, is to set the requires_grad flag to True for this tensor:
nx = net_x()
r = torch.tensor([1.0,2.0], requires_grad=True)
Then, as explained in the autograd documentation, grad computes the gradients of outputs with respect to inputs, so you need to save the output of the model:
y = nx(r)
Now you can compute the gradients with respect to r. But there is one last issue: grad only knows how to propagate gradients from a scalar tensor, which y is not. So you need to compute the gradients with respect to each coordinate:
for x in y:
    print(grad(x, r, retain_graph=True))
or equivalently:
for i in range(y.shape[0]):
    # prints the vector (dy_i/dr_0, dy_i/dr_1, ..., dy_i/dr_n)
    print(grad(y[i], r, retain_graph=True))
You need retain_graph=True because without this flag, the computational graph is cleared after the first backward pass. And there you have it: the derivative of each coordinate of nx(r) with respect to r!
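As a side note, newer PyTorch versions also provide torch.autograd.functional.jacobian, which assembles the whole matrix in one call; a minimal sketch, assuming the same nx and r as above:

from torch.autograd.functional import jacobian

# Full Jacobian of nx at r: shape (4, 2) = (output dim, input dim)
J = jacobian(nx, r)
print(J.shape)  # torch.Size([4, 2])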
To answer your question in the comments:
It's not an error, it's normal. You have a batched input of size (B, 2), with B = 8, and you get a batched output of shape (B, 4). Now, for each vector of the batched output, and for each coordinate of that vector, you can compute the derivative with respect to the batched input, which yields a gradient of size (B, 2), like this:
for b in y:  # there are B vectors b, each of shape (4,)
    for x in b:  # there are 4 coordinates
        # this prints a tensor of shape (B, 2)
        print(grad(x, r, retain_graph=True))
Now remember how batches work: all samples in a batch are computed together to harness the power of the GPU, but they are completely independent of each other. So all the b vectors are actually outputs of the network for different inputs, which means the gradient of the i-th output vector with respect to the j-th input vector must be 0 if i != j. Does that make sense? It's like computing f(x, y) = (x^2, y^2): the derivative of y^2 with respect to x is obviously 0. Consider x and y to be two samples from one batch, and you have your explanation for why there are a lot of zeros in your results.
A last code sample to make it even clearer:
inputs = [torch.randn(1, 2, requires_grad=True) for i in range(8)]
r = torch.cat(inputs)  # shape: (8, 2)
y = nx(r)              # shape: (8, 4)
for i in range(len(y)):
    print(f"Gradients of y[{i}] wrt r[{i}]")
    for x in y[i]:
        # prints a tensor of size (2,)
        print(grad(x, inputs[i], retain_graph=True))
On to why all the gradients are the same. This is because your neural network is completely linear: you have 3 nn.Linear layers and no non-linear activation function (as a consequence, it is literally equivalent to a network with only one layer). One property of linear layers is that their gradient is constant: d(alpha*x)/dx = alpha (independent of x). Therefore the gradients will be identical for all inputs. Just add non-linear activation layers like sigmoids and this behaviour will not happen again.
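For illustration, a minimal sketch of the same network with ReLU activations inserted between the linear layers (any non-linearity would do; ReLU is just an example), after which the per-sample gradients are no longer identical:

import torch
import torch.nn as nn
from torch.autograd import grad

class net_x_nonlinear(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(2, 20)
        self.fc2 = nn.Linear(20, 20)
        self.out = nn.Linear(20, 4)

    def forward(self, x):
        x = torch.relu(self.fc1(x))  # non-linearity makes the gradient depend on the input
        x = torch.relu(self.fc2(x))
        return self.out(x)

nx = net_x_nonlinear()
a = torch.rand((8, 2), requires_grad=True)
for input_tensor in a:
    output_tensor = nx(input_tensor)
    print(grad(output_tensor[0], input_tensor)[0][0])  # now varies from sample to sample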

Using Gradient Tape for Jacobian of LSTM model - Python

I am building a sequence-to-one prediction model using an LSTM. My data has 4 input variables and 1 output variable which needs to be predicted. The data is a time series. The total length of the data is 38265 timesteps, stored in a DataFrame of size 38265 * 5.
I want to use the previous 20 timesteps of the 4 input variables to predict my output variable. I am using the code below for this purpose.
model = Sequential()
model.add(LSTM(units=120, activation='relu', return_sequences=False,
               input_shape=(train_in.shape[1], 5)))
model.add(Dense(100, activation='relu'))
model.add(Dense(50, activation='relu'))
model.add(Dense(1))
I want to calculate the Jacobian of the output variable w.r.t. the inputs of the LSTM model using tf.GradientTape. Can anyone help me out with this?
The Jacobian of the output with respect to the LSTM input can be obtained as follows:
Using tf.GradientTape(), we can compute the Jacobian arising from the gradient flow.
However, for getting the Jacobian, the input needs to be a tf.EagerTensor, which is what we have when we want the Jacobian of the output after executing y = model(x). The following code snippet shows this idea:
# Get the Jacobian for each persistent gradient evaluation
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(2, activation='relu'))
model.add(tf.keras.layers.Dense(2, activation='relu'))

x = tf.constant([[5., 6., 3.]])
with tf.GradientTape(persistent=True, watch_accessed_variables=True) as tape:
    # Forward pass
    tape.watch(x)
    y = model(x)
    loss = tf.reduce_mean(y**2)

print('Gradients\n')
jacobian_wrt_loss = tape.jacobian(loss, x)
print(f'{jacobian_wrt_loss}\n')
jacobian_wrt_y = tape.jacobian(y, x)
print(f'{jacobian_wrt_y}\n')
But for getting intermediate outputs, such as in this case, there are many examples that use Keras. When we take the outputs coming out of model.layers[i].output, the type is a Keras tensor instead of an EagerTensor.
However, for creating the Jacobian we need the EagerTensor. (I had many failed attempts with @tf.function wrapping, as eager execution is already present in TF >= 2.0.)
So, alternatively, an auxiliary model can be created with the layers required (in this case, just the Input and LSTM layers). The output of this model will be a tf.EagerTensor, which is useful for the Jacobian tensor creation. This is shown in the following snippet:
# General syntax for getting Jacobians for each layer output
import numpy as np
import tensorflow as tf

tf.executing_eagerly()
x = tf.constant([[15., 60., 32.]])
x_inp = tf.keras.layers.Input(tensor=tf.constant([[15., 60., 32.]]))

model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(2, activation='relu', name='dense_1'))
model.add(tf.keras.layers.Dense(2, activation='relu', name='dense_2'))

aux_model = tf.keras.Sequential()
aux_model.add(tf.keras.layers.Dense(2, activation='relu', name='dense_1'))
# model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

with tf.GradientTape(persistent=True, watch_accessed_variables=True) as tape:
    # Forward pass
    tape.watch(x)
    x_y = model(x)
    act_y = aux_model(x)
    print(x_y, type(x_y))
    ops = [layer.output for layer in model.layers]
    # inps = [layer.input for layer in model.layers]

print('Jacobian of Full FFNN\n')
jacobian = tape.jacobian(x_y, x)
print(f'{jacobian[0]}\n')

print('Jacobian of FFNN with just the first Dense\n')
jacobian = tape.jacobian(act_y, x)
print(f'{jacobian[0]}\n')
Here I have used a simple FFNN consisting of 2 Dense layers, but I want to evaluate the Jacobian w.r.t. the output of the first Dense layer. Hence I created an auxiliary model having just one Dense layer and determined the Jacobian from its output.
The details can be found here.
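Applied to the LSTM question itself, one possible variation (only a sketch, not part of the original answer) is to build a sub-model that reuses the trained LSTM layer, so its output is an EagerTensor produced with the trained weights. Here model is assumed to be the trained model from the question and test_in a hypothetical NumPy batch of shape (batch, 20, 5):

import tensorflow as tf

# Sub-model exposing the trained LSTM layer's output
lstm_submodel = tf.keras.Model(inputs=model.inputs,
                               outputs=model.layers[0].output)

x = tf.convert_to_tensor(test_in, tf.float32)  # hypothetical data, shape (batch, 20, 5)
with tf.GradientTape() as tape:
    tape.watch(x)
    lstm_out = lstm_submodel(x)  # EagerTensor, shape (batch, 120)

# Jacobian of the LSTM layer's output w.r.t. the input windows
jac = tape.jacobian(lstm_out, x)  # shape (batch, 120, batch, 20, 5)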
With help from @Abhilash Majumder, I have done it this way. I am posting it here so that it might help someone in the future.
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras.models import load_model

tf.compat.v1.enable_eager_execution()  # this enables eager execution, which is a must
tf.executing_eagerly()  # check if eager execution is enabled or not; should give "True"

data = pd.read_excel("FileName or Location")
# My data is in the form of a DataFrame with 127549 rows and 5 columns (127549 * 5)

a = data[:20]    # shape is (20, 5)
b = data[50:70]  # shape is (20, 5)
A = [a, b]       # making a list
A = np.array(A)  # convert into an array of size (2, 20, 5)
At = tf.convert_to_tensor(A, np.float32)  # convert into a tensor
At.shape  # TensorShape([Dimension(2), Dimension(20), Dimension(5)])

model = load_model('EKF-LSTM-1.h5')  # load the trained model
# I have a trained model which is shown in the question above.
# Output of this model is a single value

with tf.GradientTape(persistent=True, watch_accessed_variables=True) as tape:
    tape.watch(At)
    y1 = model(At)  # defining your output as a function of the input variables

print(y1, type(y1))
# output
# tf.Tensor([[0.04251503],[0.04634088]], shape=(2, 1), dtype=float32) <class 'tensorflow.python.framework.ops.EagerTensor'>

jacobian = tape.jacobian(y1, At)  # Jacobian of the output w.r.t. both inputs
jacobian.shape
Output
TensorShape([Dimension(2), Dimension(1), Dimension(2), Dimension(20), Dimension(5)])
Here I calculated the Jacobian w.r.t. 2 inputs, each of size (20, 5). If you want to calculate it w.r.t. only one input of size (20, 5), then use this:
jacobian=tape.jacobian(y1,At[0]) #jacobian of output w.r.t only 1st input in 'At'
jacobian.shape
Output
TensorShape([Dimension(1), Dimension(1), Dimension(1), Dimension(20), Dimension(5)])
For those looking to compute the Jacobian over a series of inputs and outputs that are independent of each other (output[j] does not depend on input[i] for i != j), consider the batch_jacobian method.
This will reduce the number of dimensions in your computed Jacobian tensor by one and could be the difference between running out of memory and not.
See: batch_jacobian in the TensorFlow GradientTape docs.
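A rough sketch reusing the At and model names from the answer above (each batch element's output depends only on its own input window, so the cross-sample blocks are zero anyway):

with tf.GradientTape() as tape:
    tape.watch(At)
    y1 = model(At)  # shape (2, 1)

# One Jacobian per batch element: shape (2, 1, 20, 5) instead of (2, 1, 2, 20, 5)
batch_jac = tape.batch_jacobian(y1, At)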

Calculate Jacobian Matrix of LSTM Model - Python

I have a trained LSTM model with 1 LSTM layer and 3 Dense layers. I am using it for sequence-to-one prediction. I have 4 input variables and 1 output variable. I am using the values of the last 20 timesteps to predict the next value of my output variable. The architecture of the model is shown below.
model = Sequential()
model.add(LSTM(units=120, activation='relu', return_sequences=False,
               input_shape=(train_in.shape[1], 5)))
model.add(Dense(100, activation='relu'))
model.add(Dense(50, activation='relu'))
model.add(Dense(1))
The shapes of the training input and training output are shown below.
train_in.shape , train_out.shape
((89264, 20, 5), (89264,))
I want to calculate the jacobian matrix for this model.
Say Y = f(x1, x2, x3, x4) is the representation of the above neural network, where:
Y is the output variable of the trained model, f is the function representing the model, and x1, x2, x3, x4 are the input variables.
How can I calculate the Jacobian matrix? Please share your thoughts on this, and any valuable references if you know of any.
Thank you :)
You might want to take a look at tf.GradientTape in TensorFlow. GradientTape is a very simple way to auto-differentiate your computation, and the linked docs have some basic examples.
However, your model is already quite big. If you compute a Jacobian with respect to n parameters, it will have (number of outputs) * n values, and I believe your model probably already has more than 10,000 parameters. You might need to make it smaller.
I found a way to get the Jacobian matrix of the LSTM model output with respect to the input. I am posting it here so that it might help someone in the future. Please share if there is a better or simpler way to do the same.
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras.models import load_model

tf.compat.v1.enable_eager_execution()  # this enables eager execution, which is a must
tf.executing_eagerly()  # check if eager execution is enabled or not; should give "True"

data = pd.read_excel("FileName or Location")
# My data is in the form of a DataFrame with 127549 rows and 5 columns (127549 * 5)

a = data[:20]    # shape is (20, 5)
b = data[50:70]  # shape is (20, 5)
A = [a, b]       # making a list
A = np.array(A)  # convert into an array of size (2, 20, 5)
At = tf.convert_to_tensor(A, np.float32)  # convert into a tensor
At.shape  # TensorShape([Dimension(2), Dimension(20), Dimension(5)])

model = load_model('EKF-LSTM-1.h5')  # load the trained model
# I have a trained model which is shown in the question above.
# Output of this model is a single value

with tf.GradientTape(persistent=True, watch_accessed_variables=True) as tape:
    tape.watch(At)
    y1 = model(At)  # defining your output as a function of the input variables

print(y1, type(y1))
# output
# tf.Tensor([[0.04251503],[0.04634088]], shape=(2, 1), dtype=float32) <class 'tensorflow.python.framework.ops.EagerTensor'>

jacobian = tape.jacobian(y1, At)  # Jacobian of the output w.r.t. both inputs
jacobian.shape
Output
TensorShape([Dimension(2), Dimension(1), Dimension(2), Dimension(20), Dimension(5)])
Here I calculated the Jacobian w.r.t. 2 inputs, each of size (20, 5). If you want to calculate it w.r.t. only one input of size (20, 5), then use this:
jacobian=tape.jacobian(y1,At[0]) #jacobian of output w.r.t only 1st input in 'At'
jacobian.shape
Output
TensorShape([Dimension(1), Dimension(1), Dimension(1), Dimension(20), Dimension(5)])

LSTM doesn't learn to add random numbers

I was trying to do a pretty simple thing: train an LSTM that takes a sequence of random numbers and outputs their sum. But after some hours without converging, I decided to ask here which of my premises doesn't work.
The idea is simple:
I generate a training set of random-number sequences of some fixed length and label each with its sum (the numbers are drawn from a normal distribution).
I use an LSTM with an RMSE loss to predict the output, the sum of these numbers, given the sequence input.
Intuitively, the LSTM should learn to set the weights of the input gate to 1 (bias 0), the weights of the forget gate to 0 (bias 1), and the weights of the output gate to 1 (bias 0), and thereby learn to add these numbers, but it doesn't. I am pasting the code I use; I have tried different learning rates, optimizers, and batching, and observed the gradients and the outputs, but I can't find the exact reason why it is failing.
Code for generating sequences:
import tensorflow as tf
import numpy as np

tf.enable_eager_execution()

def generate_sequences(n_samples, seq_len):
    total_shape = n_samples * seq_len
    random_values = np.random.randn(total_shape)
    random_values = random_values.reshape(n_samples, -1)
    targets = np.sum(random_values, axis=1)
    return random_values, targets
Code for training:
n_samples = 100000
seq_len = 2
lr = 0.1
epochs = n_samples
batch_size = 1
input_shape = 1

data, targets = generate_sequences(n_samples, seq_len)
train_data = tf.data.Dataset.from_tensor_slices((data, targets))
output = tf.keras.layers.RNN(
    tf.keras.layers.LSTMCell(1, dtype='float64', recurrent_activation=None, activation=None),
    input_shape=(batch_size, seq_len, input_shape))
iterator = train_data.batch(batch_size).make_one_shot_iterator()
optimizer = tf.train.AdamOptimizer(lr)

for i in range(epochs):
    my_inp, target = iterator.get_next()
    with tf.GradientTape(persistent=True) as tape:
        tape.watch(my_inp)
        my_out = output(tf.reshape(my_inp, shape=(batch_size, seq_len, 1)))
        loss = tf.sqrt(tf.reduce_sum(tf.square(target - my_out))) / batch_size
    grads = tape.gradient(loss, output.trainable_variables)
    optimizer.apply_gradients(zip(grads, output.trainable_variables),
                              global_step=tf.train.get_or_create_global_step())
I also had a conjecture that this is a theoretical problem: if we sum random values drawn from a normal distribution, the output is not in the [-1, 1] range, and the LSTM, due to its tanh activations, can't learn it. But changing them didn't improve the performance (they are changed to linear in the example code).
EDIT:
I set the activations to linear after realising that tanh() squashes the values.
SOLVED:
The problem was actually the tanh() of the gates and recurrent states, which was squashing my outputs and not allowing them to grow by adding up the summands. Setting all activations to linear works pretty well.

Compute entropy for continuous values in python

I want to compute the entropy of two matrices (inputs). I want the output entropy to still have the matrix shape.
For example:
import numpy as np

def entropy(x, y):
    probs = np.mean((x, y), axis=0)
    p = probs.astype(np.float32)
    return -p * np.log2(p)

inp1 = np.random.random([5, 4])
inp2 = np.random.random([5, 4])
inp1_flatt = inp1.reshape([-1])
inp2_flatt = inp2.reshape([-1])
combine_out = entropy(inp1_flatt, inp2_flatt).reshape([5, 4])
In the entropy() function, I think I have a problem with computing the posterior probabilities (probs).
How can I compute the posterior probabilities correctly?
EDIT:
These two inputs are supposed to be the regression outputs of a neural network, and I want to keep their shape. Since the input of the entropy function (the output of the neural network) has shape [5, 4], I want the entropy output to have shape [5, 4] as well. I want to do something like combining sources using entropy (a joint entropy method for continuous values).
