In Python, I was attempting to dive into the GPy library for estimating Gaussian Process models, when I encountered a stumbling block early on with simple plotting.
For my data, I generated a simple sine wave with squared growth added midway, and GPy successfully estimated the initial model.
Data generation:
## Generating data for regression
# First, regular sine wave + normal noise
x = np.linspace(0,40, num=300)
noise1 = np.random.normal(0,0.3,300)
y = np.sin(x) + noise1
# Second, an upward trend starting midway, with its own noise as well
temp = x[150:]
noise2 = 0.004*temp**2 + np.random.normal(0,0.1,150)
y[150:] = y[150:] + noise2
plt.plot(x, y)
Initial model:
## Pre-processing
X = np.expand_dims(x, axis=1)
Y = np.expand_dims(y, axis=1)
## Model
kernel = GPy.kern.RBF(input_dim=1, variance=1., lengthscale=1.)
model1 = GPy.models.GPRegression(X, Y, kernel)
## Plotting
fig = model1.plot()
GPy.plotting.show(fig, filename='basic_gp_regression_notebook')
However, this model is mis-specified, since the data was created from sin(X) and X^2, not from X directly, so I create the next model:
X_all = np.hstack((np.sin(X), np.square(X)))
model2 = GPy.models.GPRegression(X_all, Y, kernel)
fig = model2.plot()
GPy.plotting.show(fig, filename='basic_correct_gp_regression_notebook')
However, now I am getting plotting errors:
Invalid value of type 'builtins.str' received for the 'size' property of scatter.marker Received value: '5'
I assume this is because the plot does not know to use "X" as the x-axis, having been supplied only sin(X) and X^2.
How could I fix this?
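One possible workaround (not part of the original post, just a hedged sketch): since model2's inputs are the transformed features rather than x itself, you can skip GPy's built-in plotting and plot the predictions against the original x by hand with matplotlib, using the standard GPRegression methods predict and predict_quantiles.
# Minimal sketch, assuming model2, X_all, x and y from the snippets above
mean, var = model2.predict(X_all)                  # posterior mean/variance at the training inputs
lower, upper = model2.predict_quantiles(X_all)     # 2.5% and 97.5% quantiles by default
plt.plot(x, y, 'k.', alpha=0.3, label='data')
plt.plot(x, mean[:, 0], 'b', label='GP mean')
plt.fill_between(x, lower[:, 0], upper[:, 0], color='b', alpha=0.2, label='95% interval')
plt.legend()
plt.show()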
In the residual plot resulting from the code below, there is a substantial drop in values around the halfway point.
I would like to help visualise this for those less statistically inclined by plotting two average lines over the residual plot:
one from x = 0 to 110
and the second from x = 110 to 240
Here is the code
FINAL LINEAR MODEL
x = merged[['Imp_Col_LNG', 'AveSH_LNG']].values
y = merged['Unproductive_LNG'].values
from sklearn.linear_model import LinearRegression
reg = LinearRegression()
reg.fit(x,y)
# plt.scatter(x, y)
yp=reg.predict(x)
# plt.plot(xp,yp)
# plt.text(x.max()*0.7,y.max()*0.1,'$R^2$ = {score:.4f}'.format(score=reg.score(x,y)))
# plt.show()
plt.scatter(yp, y)
s = yp.argsort()
plt.plot(yp[s], yp[s],color='k',ls='--')
from scipy.stats import norm
res = y - yp  # residuals, needed here to set the interval width
ub = yp + norm.ppf(0.5+0.95/2) * res.std(ddof=1)
lb = yp - norm.ppf(0.5+0.95/2) * res.std(ddof=1)
plt.plot(yp[s], ub[s],color='k',ls='--')
plt.plot(yp[s], lb[s],color='k',ls='--')
plt.text(x.max()*0.7,y.max()*0.1,'$R^2$ = {score:.4f}'.format(score=reg.score(x,y)))
plt.xlabel('Predicted Values')
plt.ylabel('Observed Values')
plt.title('LNG_Shuffles')
plt.show()
RESIDUAL PLOTS
res = pd.Series(y - yp)
checkresiduals(res)
plt.plot(res)
Since we are trying to plot the average of the residuals from (0, 110) and (110, 240), we first have to calculate the averages for each section.
Here, res stores the residual data in the form of a pd.Series object. To get the array information from it, we can use the to_numpy method of the pd.Series objects.
res_data = res.to_numpy()
Now, let's calculate the mean for each part.
first_average = res_data[:110].mean()
second_average = res_data[110:].mean()
Now, since we are going to plot each average over its own x-range, we build constant arrays of the matching lengths and plot them.
plt.plot(np.arange(110), np.ones(110) * first_average)
plt.plot(np.arange(110, 240), np.ones(130) * second_average)
This should give you the piecewise residual average plot.
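As a small usage sketch (assuming res_data and the two averages computed above, and the 0-110 / 110-240 split from the question), you can also overlay the two average lines on the residual series itself:
plt.plot(res_data, color='grey', alpha=0.6, label='residuals')
plt.plot(np.arange(110), np.full(110, first_average), label='mean of first section')
plt.plot(np.arange(110, 240), np.full(130, second_average), label='mean of second section')
plt.legend()
plt.show()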
I wrote a Python script that uses scikit-learn to fit Gaussian Processes to some data.
IN SHORT: the problem I am facing is that while the Gaussian Processes seem to fit the training dataset very well, the predictions for the testing dataset are off, and it seems to me there is a normalization problem behind this.
IN DETAIL: my training dataset is a set of 1500 time series. Each time series has 50 time components. The mapping learnt by the Gaussian Processes is between a set of three coordinates x, y, z (which represent the parameters of my model) and one time series. In other words, there is a 1:1 mapping between x, y, z and one time series, and the GPs learn this mapping. The idea is that, by giving the trained GPs new coordinates, they should be able to give me the predicted time series associated with those coordinates.
Here is my code:
from __future__ import division
import numpy as np
from matplotlib import pyplot as plt
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
coordinates_training = np.loadtxt(...) # read coordinates training x, y, z from file
coordinates_testing = np.loadtxt(..) # read coordinates testing x, y, z from file
# z-score of the coordinates for the training and testing data.
# Note I am using the mean and std of the training dataset ALSO to normalize the testing dataset
mean_coords_training = np.zeros(3)
std_coords_training = np.zeros(3)
for i in range(3):
    mean_coords_training[i] = coordinates_training[:, i].mean()
    std_coords_training[i] = coordinates_training[:, i].std()
    coordinates_training[:, i] = (coordinates_training[:, i] - mean_coords_training[i])/std_coords_training[i]
    coordinates_testing[:, i] = (coordinates_testing[:, i] - mean_coords_training[i])/std_coords_training[i]
time_series_training = np.loadtxt(...)# reading time series of training data from file
number_of_time_components = np.shape(time_series_training)[1] # number of time components per series
# z_score of the time series
mean_time_series_training = np.zeros(number_of_time_components)
std_time_series_training = np.zeros(number_of_time_components)
for i in range(number_of_time_components):
    mean_time_series_training[i] = time_series_training[:, i].mean()
    std_time_series_training[i] = time_series_training[:, i].std()
    time_series_training[:, i] = (time_series_training[:, i] - mean_time_series_training[i])/std_time_series_training[i]
time_series_testing = np.loadtxt(...)# reading test data from file
# the number of time components is the same for training and testing dataset
# z-score of testing data, again using mean and std of training data
for i in range(number_of_time_components):
    time_series_testing[:, i] = (time_series_testing[:, i] - mean_time_series_training[i])/std_time_series_training[i]
# GPs
pred_time_series_training = np.zeros((np.shape(time_series_training)))
pred_time_series_testing = np.zeros((np.shape(time_series_testing)))
# Instantiate a Gaussian Process model
kernel = 1.0 * Matern(nu=1.5)
gp = GaussianProcessRegressor(kernel=kernel)
for i in range(number_of_time_components):
    print("time component", i)
    # Fit to data using Maximum Likelihood Estimation of the parameters
    gp.fit(coordinates_training, time_series_training[:, i])
    # Make the prediction on the training and testing coordinates (ask for the std as well)
    y_pred_train, sigma_train = gp.predict(coordinates_training, return_std=True)
    y_pred_test, sigma_test = gp.predict(coordinates_testing, return_std=True)
    pred_time_series_training[:, i] = y_pred_train*std_time_series_training[i] + mean_time_series_training[i]
    pred_time_series_testing[:, i] = y_pred_test*std_time_series_training[i] + mean_time_series_training[i]
# plot training
fig, ax = plt.subplots(5, figsize=(10,20))
for i in range(5):
    ax[i].plot(time_series_training[100*i], color='blue', label='Original training')
    ax[i].plot(pred_time_series_training[100*i], color='black', label='GP predicted - training')
# plot testing
fig, ax = plt.subplots(5, figsize=(10,20))
for i in range(5):
    ax[i].plot(time_series_testing[100*i], color='blue', label='Original testing')
    ax[i].plot(pred_time_series_testing[100*i], color='black', label='GP predicted - testing')
Here are examples of performance on the training data.
Here are examples of performance on the testing data.
First, you should use scikit-learn's preprocessing tools to prepare your data.
from sklearn.preprocessing import StandardScaler
There are other useful tools in that module, but this particular one standardizes the data.
Second, you should normalize the training set and the test set with the same parameters. The model fits the "geometry" of the data to determine its parameters; if you score it on data with a different scale, it is like using the wrong system of units.
scale = StandardScaler()
training_set = scale.fit_transform(data_train)
test_set = scale.transform(data_test)
This applies the same transformation to both sets.
Finally, you should normalize the features, not the target; that is, normalize the X entries, not the Y output. Normalization helps the model find the answer faster by changing the topology of the objective function in the optimization process; the output does not affect this.
I hope this answers your question.
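As a hedged illustration of that advice applied to the question's variable names (a sketch, not the original code): fit the scaler on the training coordinates only, then reuse the same transformation for the test coordinates.
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
# fit on the training coordinates only, then reuse the same mean/std for the test set
coordinates_training = scaler.fit_transform(coordinates_training)
coordinates_testing = scaler.transform(coordinates_testing)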
After having looked through all the docs and examples online, I have not been able to find a way to extract information regarding the confidence or prediction intervals from GPy models.
I generate dummy data like this,
## Generating data for regression
# First, regular sine wave + normal noise
x = np.linspace(0,40, num=300)
noise1 = np.random.normal(0,0.3,300)
y = np.sin(x) + noise1
## Second, an upward trend starting midway, with its own noise as well
temp = x[150:]
noise2 = 0.004*temp**2 + np.random.normal(0,0.1,150)
y[150:] = y[150:] + noise2
plt.plot(x, y)
and then estimate a basic model,
## Pre-processing
X = np.expand_dims(x, axis=1)
Y = np.expand_dims(y, axis=1)
## Model
kernel = GPy.kern.RBF(input_dim=1, variance=1., lengthscale=1.)
model1 = GPy.models.GPRegression(X, Y, kernel)
However, nothing makes it clear how to proceed from there... Another question here tried asking the same thing, but that answer no longer works and seems rather unsatisfactory for such an important element of statistical modelling.
Given a model, and a set of target x values we want to generate the intervals at, you can extract the intervals using:
intervals = model.predict_quantiles(X=target_x_vals, quantiles=(2.5, 97.5))
You can change the quantiles argument to get the appropriate width ones. The documentation for this function is found at: https://gpy.readthedocs.io/en/deploy/_modules/GPy/core/gp.html
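For example, with model1 and the generated data from the question (a minimal sketch; new_x is just a hypothetical grid of target points):
new_x = np.linspace(0, 40, 200)[:, None]   # hypothetical target points, shaped (n, 1)
lower, upper = model1.predict_quantiles(new_x, quantiles=(2.5, 97.5))
plt.plot(x, y, 'k.', alpha=0.3)
plt.fill_between(new_x[:, 0], lower[:, 0], upper[:, 0], alpha=0.3, label='95% interval')
plt.legend()
plt.show()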
New to tensorflow, python and numpy (I guess that's everything in this sample)
In the code below I (almost) understand that the update_weights.run() call in the loop is calculating the loss and developing new weights. What I don't see is how this actually causes the weights to be changed.
The point I'm stuck on is commented # THIS IS WHAT I DONT UNDERSTAND
What is the relationship between update_weights.run() and the new values being placed in weights? Or, put another way: why have the values changed when weights.eval() is called after the loop?
Thanks for any help
##test {"output": "ignore"}
# Import tf
import tensorflow as tf
# Numpy is Num-Pie n dimensional arrays
# https://en.wikipedia.org/wiki/NumPy
import numpy as np
# Plotting library
# http://matplotlib.org/users/pyplot_tutorial.html
import matplotlib.pyplot as plt
# %matplotlib magic
# http://ipython.readthedocs.io/en/stable/interactive/tutorial.html#magics-explained
%matplotlib inline
# Set up the data with a noisy linear relationship between X and Y.
# Variable?
num_examples = 5
noise_factor = 1.5
line_x_range = (-10,10)
#Just variables in Python
# np.linspace - Return evenly spaced numbers over a specified interval.
X = np.array([
    np.linspace(line_x_range[0], line_x_range[1], num_examples),
    np.linspace(line_x_range[0], line_x_range[1], num_examples)
])
# Plot out the starting data
# plt.figure(figsize=(4,4))
# plt.scatter(X[0], X[1])
# plt.show()
# np.random.randn - Return a sample (or samples) from the “standard normal” distribution.
# Generate noise for x and y (2)
noise = np.random.randn(2, num_examples) * noise_factor
# plt.figure(figsize=(4,4))
# plt.scatter(noise[0],noise[1])
# plt.show()
# += on an np.array
X += noise
# The 'Answer' polyfit to the noisy data
answer_m, answer_b = np.polyfit(X[0], X[1], 1)
# Destructuring Assignment - http://codeschool.org/python-additional-miscellany/
x, y = X
# plt.figure(figsize=(4,4))
# plt.scatter(x, y)
# plt.show()
# np.array
# for a in x
# [(1., a) for a in [1,2,3]] => [(1.0, 1), (1.0, 2), (1.0, 3)]
# numpy.ndarray.astype - http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.astype.html
# Copy of the array, cast to a specified type.
x_with_bias = np.array([(1., a) for a in x]).astype(np.float32)
#Just variables in Python
# The difference between our current outputs and the training outputs over time
# Starts high and decreases
losses = []
history = []
training_steps = 50
learning_rate = 0.002
# Start the session and give it a variable name sess
with tf.Session() as sess:
    # Set up all the tensors, variables, and operations.
    # Creates a constant tensor
    input = tf.constant(x_with_bias)
    # Transpose the ndarray y of random float numbers
    target = tf.constant(np.transpose([y]).astype(np.float32))
    # Start with random weights
    weights = tf.Variable(tf.random_normal([2, 1], 0, 0.1))
    # Initialize variables ...?obscure?
    tf.initialize_all_variables().run()
    print('Initialization complete')
    # tf.matmul - Matrix Multiplication
    # What are yhat? Why this name?
    yhat = tf.matmul(input, weights)
    # tf.sub - Matrix Subtraction
    yerror = tf.sub(yhat, target)
    # tf.nn.l2_loss - Computes half the L2 norm of a tensor without the sqrt
    # loss function?
    loss = tf.nn.l2_loss(yerror)
    # tf.train.GradientDescentOptimizer - Not sure how this is updating the weights tensor?
    # What is it operating on?
    update_weights = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
    # _ in Python is conventionally used for a throwaway variable
    for step in range(training_steps):
        # Repeatedly run the operations, updating the TensorFlow variable.
        # THIS IS WHAT I DONT UNDERSTAND
        update_weights.run()
        losses.append(loss.eval())
        b, m = weights.eval()
        history.append((b, m, step))
    # Training is done, get the final values for the graphs
    betas = weights.eval()
    yhat = yhat.eval()
# Show the fit and the loss over time.
# destructuring assignment
fig, (ax1, ax2, ax3) = plt.subplots(1, 3)
# Adjust whitespace between plots
plt.subplots_adjust(wspace=.2)
# Output size of the figure
fig.set_size_inches(12, 4)
ax1.set_title("Final Data Fit")
ax1.axis('equal')
ax1.axis([-15, 15, -15, 15])
# Scatter plot data x and y (pairs?) set with 60% opacity
ax1.scatter(x, y, alpha=.6)
# Scatter plot x and np.transpose(yhat)[0] (must be same length), in red, 50% transparency
# these appear to be the x values mapped onto the fitted line (i.e. the predictions yhat)
ax1.scatter(x, np.transpose(yhat)[0], c="r", alpha=.5)
# Add the line along the slope defined by betas (whatever that is)
ax1.plot(line_x_range, [betas[0] + a * betas[1] for a in line_x_range], "g", alpha=0.6)
# The polyfit coefficients are reversed in order vs the betas
ax1.plot(line_x_range, [answer_m * a + answer_b for a in line_x_range], "r", alpha=0.3)
ax2.set_title("Loss over Time")
# Create a range of integers from 0 to training_steps and plot the losses as a curve
ax2.plot(range(0, training_steps), losses)
ax2.set_ylabel("Loss")
ax2.set_xlabel("Training steps")
ax3.set_title("Slope over Time")
ax3.axis('equal')
ax3.axis([-15, 15, -15, 15])
for b, m, step in history:
ax3.plot(line_x_range, [b + a * m for a in line_x_range], "g", alpha=0.2)
# This line seems to be superfluous; removing it doesn't change the behaviour
plt.show()
OK, so update_weights.run() runs a minimizer on the loss, which you defined to be the error between your prediction and the target.
What it does is add a small quantity to the weights (how small is controlled by the learning_rate parameter) to make your loss decrease and hence make your predictions "truer".
This is what happens when you call update_weights.run(): after the call your weights have changed by a small amount, and if everything went according to plan your loss has decreased.
What you want is to follow the evolution of your loss and your weights, to see, for example, whether the loss is really decreasing (so your algorithm works), whether the weights are changing a lot, or perhaps to visualize them.
You can gain a lot of insight by visualizing how the loss changes.
This is why you keep the full history of the evolution of the parameters and the loss; that is why you eval them at each step.
The eval or run operation behaves differently on the minimizer and on the parameters: when you run it on the minimizer it applies the gradient update to the weights, whereas when you eval the weights it simply reads their current values.
I strongly advise you to read this website, where the author explains what is going on far better than I can and in more detail.
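To make the run-vs-eval distinction concrete, here is a minimal sketch (not from the original answer) in the same TF 0.x-era API as the question: running the optimizer op mutates the variable, while eval() only reads it.
import tensorflow as tf

w = tf.Variable(5.0)                                   # a single scalar weight
loss = tf.square(w)                                    # loss = w^2, minimized at w = 0
step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    tf.initialize_all_variables().run()
    print(w.eval())   # 5.0 -- eval just reads the current value
    step.run()        # one gradient update: w <- w - 0.1 * dloss/dw = 5.0 - 0.1*10.0
    print(w.eval())   # 4.0 -- running the op changed the variable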
I am completely new to pymc3, so please excuse the fact that this is likely trivial. I have a very simple model where I am predicting a binary response function. The model is almost a verbatim copy of this example: https://github.com/pymc-devs/pymc3/blob/master/pymc3/examples/gelman_bioassay.py
I get back the model parameters (alpha, beta, and theta), but I can't seem to figure out how to overplot the predictions of the model vs. the input data. I tried doing this (using the parlance of the bioassay model):
from scipy.stats import binom
mean_alpha = np.mean(trace['alpha'])
mean_beta = np.mean(trace['beta'])
pred_death = binom.rvs(n, 1./(1.+np.exp(-(mean_alpha + mean_beta * dose))))
and then plotting dose vs. pred_death, but this is manifestly not correct as I get different draws of the binomial distribution every time.
Related to this is another question, how do I evaluate the goodness of fit? I couldn't seem to find anything to that effect in the "getting started" pymc3 tutorial.
Thanks very much for any advice!
Hi, a simple way to do it is as follows:
from pymc3 import *
import numpy as np
from numpy import ones, array
# Samples for each dose level
n = 5 * ones(4, dtype=int)
# Log-dose
dose = array([-.86, -.3, -.05, .73])
def invlogit(x):
    return np.exp(x) / (1 + np.exp(x))
with Model() as model:
    # Logit-linear model parameters
    alpha = Normal('alpha', 0, 0.01)
    beta = Normal('beta', 0, 0.01)
    # Calculate probabilities of death
    theta = Deterministic('theta', invlogit(alpha + beta * dose))
    # Data likelihood
    deaths = Binomial('deaths', n=n, p=theta, observed=[0, 1, 3, 5])
    start = find_MAP()
    step = NUTS(scaling=start)
    trace = sample(2000, step, start=start, progressbar=True)
import matplotlib.pyplot as plt
death_fit = np.percentile(trace.theta,50,axis=0)
plt.plot(dose, death_fit, 'g', marker='.', lw=1.25, ls='-', ms=5, mew=1)
plt.show()
If you want to plot dose vs pred_death, where pred_death is computed from the mean estimated values of alpha and beta, then do:
pred_death = 1./(1. + np.exp(-(mean_alpha + mean_beta * dose)))
plt.plot(dose, pred_death)
Instead, if you want to plot dose vs pred_death where pred_death takes into account the uncertainty in the posterior for alpha and beta, then probably the easiest way is to use the function sample_ppc. It may look something like this:
ppc = pm.sample_ppc(trace, samples=100, model=pmmodel)
for i in range(100):
    plt.plot(dose, ppc['deaths'][i], 'bo', alpha=0.5)
Using Posterior Predictive Checks (ppc) is a way to check how well your model behaves by comparing the predictions of the model to your actual data. Here you have an example of sample_ppc
Other options could be to plot the mean value plus some interval of interest.
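For that last option, a hedged sketch (assuming the ppc dictionary and the n array from the code above): summarize the posterior predictive draws with percentiles and plot the median together with an interval.
deaths_ppc = ppc['deaths'] / n                         # convert counts to proportions per dose level
low, mid, high = np.percentile(deaths_ppc, [2.5, 50, 97.5], axis=0)
plt.plot(dose, mid, 'b-', label='posterior predictive median')
plt.fill_between(dose, low, high, color='b', alpha=0.2, label='95% interval')
plt.legend()
plt.show()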