Most significant input dimensions for GPy.GPCoregionalizedRegression? - python

I have successfully trained a multi-output Gaussian process model using a GPy.models.GPCoregionalizedRegression model from the GPy package. The model has ~25 inputs and 6 outputs.
The underlying kernel is a GPy.util.multioutput.ICM kernel consisting of a RationalQuadratic kernel (GPy.kern.RatQuad) and the GPy.kern.Coregionalize kernel.
I am now interested in the feature importance for each individual output. The RatQuad kernel provides an ARD=True (Automatic Relevance Determination) keyword, which makes it possible to obtain the feature importances in a single-output model (this is also what the get_most_significant_input_dimensions() method of a GPy model exploits).
However, calling the get_most_significant_input_dimensions() method on the GPy.models.GPCoregionalizedRegression model gives me a single list of indices, which I assume are the most significant inputs for all outputs combined.
How can I calculate/obtain the lengthscale values or most significant features for each individual output of the model?

The problem is the model itself. The intrinsic coregionalization model (ICM) is set up such that all outputs are determined by a shared underlying "latent" Gaussian process. Thus, calling get_most_significant_input_dimensions() on a GPy.models.GPCoregionalizedRegression model can only give you one set of input dimensions significant to all outputs together.
The solution is to use a GPy.util.multioutput.LCM kernel, which is defined as a sum of ICM kernels built from a list of individual (latent) GP kernels. It works as follows:
import GPy

# Your data
# x = ...
# y = ...

# # ICM case
# kernel = GPy.util.multioutput.ICM(input_dim=x.shape[1],
#                                   num_outputs=y.shape[1],
#                                   kernel=GPy.kern.RatQuad(input_dim=x.shape[1], ARD=True))

# LCM case
rank = 1  # rank of the coregionalization matrix W; choose to suit your problem
k_list = [GPy.kern.RatQuad(input_dim=x.shape[1], ARD=True) for _ in range(y.shape[1])]
kernel = GPy.util.multioutput.LCM(input_dim=x.shape[1], num_outputs=y.shape[1],
                                  W_rank=rank, kernels_list=k_list)
A reshaping of the data is needed (this is also necessary for the ICM model and thus independent of the scope of this question; see here for details).
# Reshaping data to fit GPCoregionalizedRegression
xx = reshape_for_coregionalized_regression(x)
yy = reshape_for_coregionalized_reshaping(y)
m = GPy.models.GPCoregionalizedRegression(xx, yy, kernel=kernel)
m.optimize()
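The reshaping helpers above are placeholders for your own code. As a minimal sketch of what they could do, assuming every output is observed at the same input locations x of shape (n, d) and the outputs are stacked in y of shape (n, num_outputs): GPCoregionalizedRegression accepts one array per output, so the reshaping can simply build those per-output lists.
# Sketch only -- assumes shared inputs x (n, d) and stacked outputs y (n, num_outputs)
num_outputs = y.shape[1]
xx = [x] * num_outputs                        # same input locations repeated for every output
yy = [y[:, [i]] for i in range(num_outputs)]  # one (n, 1) column per output
m = GPy.models.GPCoregionalizedRegression(xx, yy, kernel=kernel)
m.optimize()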
After the optimization has converged, one can call get_most_significant_input_dimensions() on an individual latent GP (here for output 0):
sig_inputs_0 = m.sum.ICM0.get_most_significant_input_dimensions()
or loop over all kernel parts:
sig_inputs = []
for part in m.kern.parts:
    sig_inputs.append(part.get_most_significant_input_dimensions())

Related

How to get HMM working with real-valued data in Tensorflow

I'm working with a dataset that contains data from IoT devices, and I have found that Hidden Markov Models work pretty well for my use case. As such, I'm trying to alter some code from a TensorFlow tutorial I've found here. The dataset contains real values for the observed variable, in contrast to the count data shown in the tutorial.
In particular, I believe the following needs to be changed so that the HMM has normally distributed emissions. Unfortunately, I can't find any code on how to alter the model to have an emission distribution other than Poisson.
How should I change the code to emit normally distributed values?
# Define variable to represent the unknown log rates.
trainable_log_rates = tf.Variable(
    np.log(np.mean(observed_counts)) + tf.random.normal([num_states]),
    name='log_rates')

hmm = tfd.HiddenMarkovModel(
    initial_distribution=tfd.Categorical(logits=initial_state_logits),
    transition_distribution=tfd.Categorical(probs=transition_probs),
    observation_distribution=tfd.Poisson(log_rate=trainable_log_rates),
    num_steps=len(observed_counts))

rate_prior = tfd.LogNormal(5, 5)

def log_prob():
    return (tf.reduce_sum(rate_prior.log_prob(tf.math.exp(trainable_log_rates))) +
            hmm.log_prob(observed_counts))

optimizer = tf.keras.optimizers.Adam(learning_rate=0.1)

@tf.function(autograph=False)
def train_op():
    with tf.GradientTape() as tape:
        neg_log_prob = -log_prob()
    grads = tape.gradient(neg_log_prob, [trainable_log_rates])[0]
    optimizer.apply_gradients([(grads, trainable_log_rates)])
    return neg_log_prob, tf.math.exp(trainable_log_rates)
The example model assumes that the emissions x are Poisson distributed, with one of four rates determined by the latent variable z. It therefore defines trainable rates (or log rates), and builds the HMM with a uniform initial distribution over z, the transition probabilities, and an observation distribution that is Poisson with the trainable log rates.
To change to a normal distribution, you are saying that x should be normally distributed with a trainable mean and standard deviation determined by the latent variable z. Thus, you need to replace trainable_log_rates with a trainable_loc and a trainable_scale, and change
observation_distribution=tfd.Poisson(log_rate=trainable_log_rates)
to
observation_distribution=tfd.Normal(loc=trainable_loc, scale=trainable_scale)
You then need to replace your rate_prior with a loc_prior and scale_prior of your choosing and use them to calculate your new log_prob function.
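Putting that together, here is a minimal sketch of what the changed section could look like. The names trainable_loc, trainable_raw_scale and observed_values, and the particular priors, are placeholders of my own choosing; the HMM is rebuilt inside log_prob so that the softplus keeping the scale positive stays inside the gradient tape.
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Sketch: Normal emissions instead of Poisson. observed_values is your real-valued series
# (cast it to np.float32 so dtypes match the float32 variables); num_states,
# initial_state_logits and transition_probs are as in the question.
trainable_loc = tf.Variable(
    float(np.mean(observed_values)) + tf.random.normal([num_states]), name='locs')
trainable_raw_scale = tf.Variable(tf.zeros([num_states]), name='raw_scales')

# Example priors -- pick ones that suit your data.
loc_prior = tfd.Normal(loc=0., scale=10.)
scale_prior = tfd.LogNormal(loc=0., scale=2.)

def log_prob():
    scale = tf.math.softplus(trainable_raw_scale)  # keeps the scale strictly positive
    hmm = tfd.HiddenMarkovModel(
        initial_distribution=tfd.Categorical(logits=initial_state_logits),
        transition_distribution=tfd.Categorical(probs=transition_probs),
        observation_distribution=tfd.Normal(loc=trainable_loc, scale=scale),
        num_steps=len(observed_values))
    return (tf.reduce_sum(loc_prior.log_prob(trainable_loc)) +
            tf.reduce_sum(scale_prior.log_prob(scale)) +
            hmm.log_prob(observed_values))
The train_op from the question can then stay as it is, apart from passing [trainable_loc, trainable_raw_scale] to tape.gradient and apply_gradients.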

Tensorflow applying operations inside a model: FailedPreconditionError

Say I have a CNN model that outputs N probability maps as masks, each the same size as the input image, in a U-Net-like fashion. I would then want to apply, for example, a least-squares fit on top of each mask to get function coefficients as the output instead, and use these to calculate my model's loss.
def unet_model(...):
    # init unet model
    ...
    ...
    # final layer
    mask_out = layers.Conv2D(output_channels, (1, 1), activation='softmax')(conv9)
    # start applying e.g. least squares fit here
    eq_list = tf.Variable((x_map, y_map, mask_out))
    transp = tf.transpose(a)
    ...
The transp line produces the following error when I initialize the model. (I have tested the least-squares-fit operations elsewhere.)
FailedPreconditionError: Error while reading resource variable _AnonymousVar1423 from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/_AnonymousVar1423/N10tensorflow3VarE does not exist. name: transpose/
I have some guesses, such as that transpose cannot deal with the placeholder axis used for the batch size, but I am generally clueless about this.
Before adding the variables, I needed to make sure that x_map and y_map are also batched, by expanding their dimensions along axis -1.
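For what it's worth, a minimal sketch of how I read that fix; the shapes are my assumption (x_map and y_map as (batch, H, W) coordinate maps, mask_out as (batch, H, W, C)), and combining the tensors with tf.concat instead of a tf.Variable is my own substitution to keep everything as ordinary graph tensors.
# Inside unet_model, before combining the maps with mask_out:
x_map = tf.expand_dims(x_map, axis=-1)   # (batch, H, W) -> (batch, H, W, 1)
y_map = tf.expand_dims(y_map, axis=-1)   # (batch, H, W) -> (batch, H, W, 1)
combined = tf.concat([x_map, y_map, mask_out], axis=-1)  # (batch, H, W, C + 2)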

How can I compare weights of different Keras models?

I've saved a number of models in .h5 format. I want to compare their characteristics, such as their weights.
I don't have any idea how to compare them appropriately, especially in the form of tables and figures.
Thanks in advance.
Weight introspection is a fairly advanced endeavor and requires model-specific treatment. Visualizing weights is a largely technical challenge, but what you do with that information is a different matter; I'll mostly address the former and touch upon the latter.
Update: I also recommend See RNN for weights, gradients, and activations visualization.
Visualizing weights: one approach is as follows:
1. Retrieve the weights of the layer of interest. Ex: model.layers[1].get_weights()
2. Understand the weight roles and dimensionality. Ex: LSTMs have three sets of weights (kernel, recurrent, and bias), each serving a different purpose. Within each weight matrix are gate weights (Input, Cell, Forget, Output). For Conv layers, the distinction is between filters (dim0), kernels, and strides.
3. Organize the weight matrices for visualization in a meaningful manner per (2). Ex: for Conv, unlike for LSTM, feature-specific treatment isn't really necessary, and we can simply flatten the kernel weights and bias weights and visualize them in a histogram.
4. Select a visualization method: histogram, heatmap, scatterplot, etc. For flattened data, a histogram is the best bet.
Interpreting weights: a few approaches are:
Sparsity: if the weight norm ("average") is low, the model is sparse. This may or may not be beneficial.
Health: if too many weights are zero or near-zero, it's a sign of too many dead neurons; this can be useful for debugging, as once a layer is in such a state it usually does not revert, so training should be restarted.
Stability: if weights are changing greatly and quickly, or if there are many high-valued weights, it may indicate impaired gradient performance, remedied by e.g. gradient clipping or weight constraints.
Model comparison: there isn't a way to simply look at two sets of weights from separate models side by side and decide "this is the better one"; analyze each model separately, for example as above, then decide which one's upsides outweigh its downsides.
The ultimate tiebreaker, however, will be validation performance, and it's also the more practical one. It goes as follows:
1. Train the model for several hyperparameter configurations
2. Select the one with the best validation performance
3. Fine-tune that model (e.g. via further hyperparameter configs)
Weight visualization should be mainly kept as a debugging or logging tool - as, put simply, even with our best current understanding of neural networks one cannot tell how well the model will generalize just by looking at the weights.
Suggestion: also visualize layer outputs - see this answer and sample output at bottom.
Visual example:
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Input, Conv2D, Dense, Flatten
from tensorflow.keras.models import Model

ipt = Input(shape=(16, 16, 16))
x = Conv2D(12, 8, 1)(ipt)
x = Flatten()(x)
out = Dense(16)(x)

model = Model(ipt, out)
model.compile('adam', 'mse')

X = np.random.randn(10, 16, 16, 16)  # toy data
Y = np.random.randn(10, 16)          # toy labels
for _ in range(10):
    model.train_on_batch(X, Y)

def get_weights_print_stats(layer):
    W = layer.get_weights()
    print(len(W))
    for w in W:
        print(w.shape)
    return W

def hist_weights(weights, bins=500):
    for weight in weights:
        plt.hist(np.ndarray.flatten(weight), bins=bins)

W = get_weights_print_stats(model.layers[1])
# 2
# (8, 8, 16, 12)
# (12,)

hist_weights(W)
Conv1D outputs visualization: (source)
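As a small follow-up to the Sparsity and Health heuristics above, here is a sketch of per-layer diagnostics one could print for the toy model; the near-zero threshold is an arbitrary choice.
import numpy as np

def weight_health_stats(model, near_zero_thresh=1e-3):
    # Mean |w| and fraction of near-zero weights per layer (threshold is arbitrary).
    for layer in model.layers:
        weights = layer.get_weights()
        if not weights:
            continue  # e.g. Input and Flatten layers have no weights
        flat = np.concatenate([w.ravel() for w in weights])
        print("%s: mean|w| = %.4f, near-zero = %.1f%%" % (
            layer.name, np.abs(flat).mean(),
            100 * np.mean(np.abs(flat) < near_zero_thresh)))

weight_health_stats(model)  # `model` from the visual example above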
To compare the weights of two models, I vectorize the model weights (i.e., create a 1D array) for each model. Then I calculate the percent difference between corresponding weights and construct a histogram of these percent differences. If all values are close to zero, this suggests (but does not prove) that the models are practically the same. This is just one of many possible approaches for comparing models using their weights.
As an aside, I will note that I use this method when I want some indication that my model has converged on a global, rather than a local, minimum. I train models with several different initializations. If all the initializations result in convergence to the same weights, this suggests that the minimum is a global one.
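A minimal sketch of that comparison, assuming both models were saved with identical architectures; the file names and the small epsilon guarding against division by zero are my own placeholders.
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.models import load_model

# Hypothetical file names; both models must have the same architecture.
model_a = load_model("model_a.h5")
model_b = load_model("model_b.h5")

# Flatten every weight tensor of each model into one long 1D vector.
wa = np.concatenate([w.ravel() for w in model_a.get_weights()])
wb = np.concatenate([w.ravel() for w in model_b.get_weights()])

# Percent difference between corresponding weights (epsilon avoids division by zero).
pct_diff = 100 * (wb - wa) / (np.abs(wa) + 1e-12)

plt.hist(pct_diff, bins=500)
plt.xlabel("% difference between corresponding weights")
plt.show()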

Why my one-filter convolutional neural network is unable to learn a simple gaussian kernel?

I was surprised that the deep learning algorithms I had implemented did not work, so I decided to create a very simple example to better understand how CNNs function. Here is my attempt at constructing a small CNN for a very simple task, which produces unexpected results.
I have implemented a simple CNN with only one layer of one filter. I have created a dataset of 5000 samples, the inputs x being 256x256 simulated images and the outputs y being the corresponding blurred images (y = signal.convolve2d(x, gaussian_kernel, boundary='fill', mode='same')).
Thus, I would like my CNN to learn the convolutional filter which would transform the original image into its blurred version. In other words, I would like my CNN to recover the Gaussian filter I used to create the blurred images. Note: as I want to 'imitate' the convolution process as it is described in the mathematical framework, I am using a Gaussian filter which has the same size as my images: 256x256.
It seems to me like quite an easy task, and nonetheless the CNN is unable to produce the results I would expect. Please find below the code of my training function and the results.
# Parameters
size_image = 256
normalization = 1
sigma = 7
n_train = 4900
ind_samples_training = np.linspace(1, n_train, n_train).astype(int)
nb_epochs = 5
minibatch_size = 5
learning_rate = np.logspace(-3, -5, nb_epochs)

tf.reset_default_graph()
tf.set_random_seed(1)
seed = 3
n_train = len(ind_samples_training)
costs = []

# Create placeholders of the correct shape
X = tf.placeholder(tf.float64, shape=(None, size_image, size_image, 1), name='X')
Y_blur_true = tf.placeholder(tf.float64, shape=(None, size_image, size_image, 1), name='Y_true')
learning_rate_placeholder = tf.placeholder(tf.float32, shape=[])

# Parameters to learn -- should be an approximation of the gaussian filter
filter_to_learn = tf.get_variable('filter_to_learn',
                                  shape=[size_image, size_image, 1, 1],
                                  dtype=tf.float64,
                                  initializer=tf.contrib.layers.xavier_initializer(seed=0),
                                  trainable=True)

# Forward propagation: build the forward propagation in the tensorflow graph
Y_blur_hat = tf.nn.conv2d(X, filter_to_learn, strides=[1, 1, 1, 1], padding='SAME')

# Cost function: add cost function to tensorflow graph
cost = tf.losses.mean_squared_error(Y_blur_true, Y_blur_hat, weights=1.0)

# Backpropagation: define the tensorflow optimizer. Use an AdamOptimizer that minimizes the cost.
opt_adam = tf.train.AdamOptimizer(learning_rate=learning_rate_placeholder)
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    optimizer = opt_adam.minimize(cost)

# Initialize all the variables globally
init = tf.global_variables_initializer()
lr = learning_rate[0]

# Start the session to compute the tensorflow graph
with tf.Session() as sess:
    # Run the initialization
    sess.run(init)
    # Do the training loop
    for epoch in range(nb_epochs):
        minibatch_cost = 0.
        seed = seed + 1
        permutation = list(np.random.permutation(n_train))
        shuffled_ind_samples = np.array(ind_samples_training)[permutation]
        # Learning rate update
        if learning_rate.shape[0] > 1:
            lr = learning_rate[epoch]
        nb_minibatches = int(np.ceil(n_train / minibatch_size))
        for num_minibatch in range(nb_minibatches):
            # Minibatch indices
            ind_minibatch = shuffled_ind_samples[num_minibatch*minibatch_size:(num_minibatch+1)*minibatch_size]
            # Loading of the original image (X) and the blurred image (Y)
            minibatch_X, minibatch_Y = load_dataset_blur(ind_minibatch, size_image, normalization, sigma)
            _, temp_cost, filter_learnt = sess.run([optimizer, cost, filter_to_learn],
                                                   feed_dict={X: minibatch_X, Y_blur_true: minibatch_Y,
                                                              learning_rate_placeholder: lr})
I have run the training for 5 epochs over 4900 samples, with a batch size equal to 5. The Gaussian kernel has a variance of 7^2 = 49.
I have tried to initialize the filter to be learnt both with the Xavier initializer provided by TensorFlow and with the true values of the Gaussian kernel we actually would like to learn. In both cases, the learnt filter turns out to be too different from the true Gaussian one, as can be seen in the two images available at https://github.com/megalinier/Helsinki-project.
Judging by the photos, it seems like the network is learning OK, as the predicted image is not far off the true label; for better results you could tweak some hyperparameters, but that is not the issue here.
I think what you are missing is the fact that different kernels can produce quite similar results, since this is a convolution.
Think about it: you are multiplying one matrix by another and then summing all the results to create a new pixel. Now, if the true label sum is 10, it could be the result of 2.5 + 2.5 + 2.5 + 2.5 or of -10 + 10 + 10 + 0.
What I am trying to say is that your network could be learning just fine, but you will get different values in the conv kernel than in the original filter.
I think this would better serve as a comment as it's somewhat speculative, but it's too long...
Hard to say what exactly is wrong, but there could be multiple culprits here. For one, squared error provides a weak signal when target and prediction are already quite similar -- and while the Xavier-initialized filter looks quite bad, the predicted (filtered) image isn't too far off the target. You could experiment with other metrics such as absolute error (e.g. 1-norm instead of 2-norm).
Second, adding regularization should help, i.e. add a weight penalty to the loss function to encourage the filter values to become small where they are not needed. As it is, what I suppose happens is: The random values in the filter average out to about 0, leading to a similar "filtering" effect as if they were actually all 0. As such, the learning algorithm doesn't have much incentive to actually pull them to 0. By adding a weight penalty, you provide this incentive.
Third, it could just be Adam messing up. It is known to provide "strange" non-optimal solutions in some very simple (e.g. convex) problems. Maybe try default Gradient Descent with learning rate decay (and possibly momentum).
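To make the first two suggestions concrete in the question's TF1 style, here is a sketch of how the cost could be changed; lam is a hypothetical regularization strength to tune, and the gradient-descent optimizer is an optional swap for Adam.
# Absolute error (1-norm) plus a weight penalty that pulls unused filter values towards 0.
lam = 1e-4  # hypothetical regularization strength
data_term = tf.reduce_mean(tf.abs(Y_blur_true - Y_blur_hat))
weight_penalty = lam * tf.reduce_sum(tf.abs(filter_to_learn))
cost = data_term + weight_penalty

# Optional: plain gradient descent instead of Adam.
opt_gd = tf.train.GradientDescentOptimizer(learning_rate=learning_rate_placeholder)
optimizer = opt_gd.minimize(cost)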

Optimizing subgraph of large graph - slower than optimizing subgraph by itself

I have a very large tensorflow graph, and two sets of variables: A and B. I create two optimizers:
learning_rate = 1e-3
optimizer1 = tf.train.AdamOptimizer(learning_rate).minimize(loss_1, var_list=var_list_1)
optimizer2 = tf.train.AdamOptimizer(learning_rate).minimize(loss_2, var_list=var_list_2)
The goal here is to iteratively optimize variables 1 and variables 2. The weights from variables 2 are used in the computation of loss 1, but they're not trainable when optimizing loss 1. Meanwhile, the weights from variables 1 are not used in optimizing loss 2 (I would say this is a key asymmetry).
I am finding, weirdly, that the optimization with optimizer2 is much slower (about 2x) than if I were to just optimize that part of the graph by itself. I'm not running any summaries.
Why would this phenomenon happen? How could I fix it? I can provide more details if necessary.
I am guessing that this is a generative adversarial network, given the relation between the losses and the parameters. It seems that the first group of parameters makes up the generative model and the second group makes up the detector model.
If my guesses are correct, then that would mean that the second model is using the output of the first model as its input. Admittedly, I am much more informed about PyTorch than TF. There is a comment which I believe is saying that the first model could be included in the second graph; I also think this is true. I would implement something similar to the following. The most important part is creating a copy of generated_tensor that is detached from the graph:
# An arbitrary label
label = torch.tensor(1.0)

# Treat GenerativeModel as the model with the first list of variables/parameters
generated_tensor = GenerativeModel(random_input_tensor)
# Treat DetectorModel as the model with the second list of variables/parameters
detector_prediction = DetectorModel(generated_tensor)

# Copy of the generated tensor with no graph attached
generated_tensor_copy = generated_tensor.detach()
detector_prediction_copy = DetectorModel(generated_tensor_copy)

# This is for optimizing the first model, but it has the second model in its graph,
# which is necessary.
loss1 = loss_func1(detector_prediction, label)
# This is for optimizing the second model. It will not have the first model in its graph.
loss2 = loss_func2(detector_prediction_copy, label)
I hope this is helpful. If anyone knows how to do this in TF, that would probably be very valuable.
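For reference, a rough TF1-style sketch of the same idea under the question's setup; generator, detector, loss_fn_1/loss_fn_2 and the inputs are hypothetical builder functions (building the detector twice assumes appropriate variable sharing), and tf.stop_gradient plays the role of the detached copy.
# Sketch only -- the builder functions and inputs below are placeholders.
generated = generator(random_input)                           # built from var_list_1
detector_pred = detector(generated)                           # full graph, used by loss_1
detector_pred_frozen = detector(tf.stop_gradient(generated))  # no gradients into var_list_1

loss_1 = loss_fn_1(detector_pred, labels)
loss_2 = loss_fn_2(detector_pred_frozen, labels)

optimizer1 = tf.train.AdamOptimizer(learning_rate).minimize(loss_1, var_list=var_list_1)
optimizer2 = tf.train.AdamOptimizer(learning_rate).minimize(loss_2, var_list=var_list_2)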
