Model
I have the following statistical model:
r_i ~ N(r | mu_i, sigma)
mu_i = w . Q_i
w ~ N(w | phi, Sigma)
prior(phi, Sigma) = NormalInvWishart(0, 1, k+1, I_k)
Where sigma is known.
Q_i and r_i (reward) are observed.
In this case, r_i and mu_i are scalars, w is 40x1, Q_i is 1x40, phi is 40x1, and Sigma is 40x40.
LaTeX formatted version: http://mathurl.com/m2utrz4
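For readability, here is the same model reconstructed in LaTeX notation from the lines above (the mathurl link contains the original formatting); the NIW arguments are read as mean, scaling, degrees of freedom, and scale matrix:
r_i \sim \mathcal{N}(\mu_i, \sigma)
\mu_i = Q_i \, w
w \sim \mathcal{N}(\phi, \Sigma)
(\phi, \Sigma) \sim \mathrm{NIW}(\mu_0 = 0,\ \lambda = 1,\ \nu = k + 1,\ \Psi = I_k)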
Python code
I'm trying to create a PyMC model that generates some samples and then approximates phi and Sigma.
import pymc as pm
import numpy as np
SAMPLE_SIZE = 100
q_samples = ... # Q created elsewhere
reward_sigma = np.identity(SAMPLE_SIZE) * 0.1
phi_true = (np.random.rand(40)+1) * -2
sigma_true = np.random.rand(40, 40) * 2. - 1.  # note: not guaranteed to be a valid (positive semi-definite) covariance matrix
weights_true = np.random.multivariate_normal(phi_true, sigma_true)
reward_true = np.random.multivariate_normal(np.dot(q_samples,weights_true), reward_sigma)
with pm.Model() as model:
    phi = pm.MvNormal('phi', np.zeros((ndims)), np.identity((ndims)) * 2)
    sigma = pm.InverseWishart('sigma', ndims+1, np.identity(ndims))
    weights = pm.MvNormal('weights', phi, sigma)
    rewards = pm.Normal('rewards', np.dot(weights, q_samples), reward_sigma, observed=reward_true)

with model:
    start = pm.find_MAP()
    step = pm.NUTS()
    trace = pm.sample(3000, step, start)

pm.traceplot(trace)
However, when I run the app, I get the following error:
Traceback (most recent call last):
File "test_pymc.py", line 46, in <module>
phi = pm.MvNormal('phi', np.zeros((ndims)), np.identity((ndims)) * 2)
TypeError: Wrong number of dimensions: expected 0, got 1 with shape (40,).
Am I setting up my model wrong somehow?
I think you're missing the shape parameter for MvNormal; MvNormal(..., shape=ndims) should fix the issue. We should probably figure out a way to infer that automatically.
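For illustration, here is a hedged sketch of the model block with the shape argument added (ndims = 40; exact argument names and the dot/Normal details vary across PyMC versions, so treat this as a sketch rather than a drop-in fix):
# Sketch only: shape=ndims tells PyMC that phi and weights are 40-dimensional vectors.
ndims = 40
with pm.Model() as model:
    phi = pm.MvNormal('phi', np.zeros(ndims), np.identity(ndims) * 2, shape=ndims)
    sigma = pm.InverseWishart('sigma', ndims + 1, np.identity(ndims))
    weights = pm.MvNormal('weights', phi, sigma, shape=ndims)
    # one scalar mean per observation, mirroring the data-generation line
    # np.dot(q_samples, weights_true); depending on the PyMC/Theano version
    # this may need theano.tensor.dot instead of np.dot
    rewards = pm.Normal('rewards', np.dot(q_samples, weights), reward_sigma,
                        observed=reward_true)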
Related
I am trying to solve a kind of inverse problem by backpropagation with PyTorch: I want to recover the parameters (r, theta) that generate a vector field U(r, theta).
As I intend to use the LBFGS optimizer from PyTorch, I realized that the operation
r*theta
is detected as inplace and thus not supported for the backward computation of the gradient, whereas
r+theta is not.
How can I overcome this? I actually need to recover fields that use transformations of the form r*theta.
Here is an example of code that reproduces the error: it runs fine if you change
field = Wrong_U_param(r, theta, positions)
by
field = U_param(r, theta, positions)
in the loop. It also works if you replace the r*theta operation by r.item()*theta (but then it does not optimize over r, since the gradient no longer depends on r).
I tried using torch.mul() to compute the product, but it fails in the same way.
The error message is the following:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
and the automatic detection points towards this very product.
Thank you for your help!
import numpy as np
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
import torch.optim as optim
from geomloss import SamplesLoss
torch.autograd.set_detect_anomaly(True)
def model(field):
    return field

def U_param(r, theta, pos):
    result = r + theta + 0. * pos
    return result

def Wrong_U_param(r, theta, pos):
    result = r * theta + 0. * pos
    return result

def learn_U_param(Zobs, ngrad, params, r_guess=0., theta_guess=0., lambd=1.):
    Npts = params[0]
    positions = torch.tensor(np.arange(0, 1, 1 / Npts) + 1 / 2 / Npts).reshape((Npts, 1))
    lab = torch.tensor(np.arange(0, Npts))
    r = torch.tensor(float(r_guess)).to(device)
    r.requires_grad = True
    theta = torch.tensor(float(theta_guess)).to(device)
    theta.requires_grad = True
    r_hist = [r.item()]
    theta_hist = [theta.item()]
    loss_hist = []
    optimizer = optim.LBFGS([r, theta])
    for i in range(ngrad):
        field = Wrong_U_param(r, theta, positions)
        Z = model(field)
        Loss = SamplesLoss(loss="sinkhorn", p=2, blur=.05)
        Wass = Loss(lab, Z, positions, lab, Zobs, positions)

        def closure():
            optimizer.zero_grad()
            Wass.backward(retain_graph=True)
            return Wass

        optimizer.step(closure)
        optimizer.zero_grad()
        r_hist.append(r.item())
        theta_hist.append(theta.item())
        loss_hist.append(Wass.item())
    return r_hist, theta_hist, loss_hist
N=100
r = 2
theta = 2
params = [N]
positions = torch.tensor(np.arange(0, 1, 1 / N) + 1 / 2 / N).reshape((N, 1))
Zobs = U_param(r, theta, positions)
ngrad = 10
print(learn_U_param(Zobs, ngrad, params, r_guess=0.1, theta_guess=0.1, lambd=1.))
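For reference, torch.optim.LBFGS calls the closure several times per step() and updates r and theta in place between those calls, so a graph built once outside the closure and reused via retain_graph=True ends up backpropagating through tensors that have since been modified in place, which matches the error reported above. The usual pattern is to rebuild the loss inside the closure. A hedged sketch of what the optimization loop inside learn_U_param could look like under that pattern (not from the original post):
    # Sketch: recompute the field and loss inside the closure, so every LBFGS
    # evaluation builds a fresh graph from the current values of r and theta.
    Loss = SamplesLoss(loss="sinkhorn", p=2, blur=.05)

    def closure():
        optimizer.zero_grad()
        field = Wrong_U_param(r, theta, positions)
        Z = model(field)
        Wass = Loss(lab, Z, positions, lab, Zobs, positions)
        Wass.backward()
        return Wass

    for i in range(ngrad):
        Wass = optimizer.step(closure)   # step() returns the closure's loss
        r_hist.append(r.item())
        theta_hist.append(theta.item())
        loss_hist.append(Wass.item())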
I am trying to use PyMC3 for a Bayesian model where I would like to repeatedly train my model on new unseen data. I am thinking I would need to update the priors with the posterior of the previously trained model every time I see new data, similar to what is done here: https://docs.pymc.io/notebooks/updating_priors.html. They use the following function, which builds a KDE from the posterior samples; each of the original parameter definitions in the model is then replaced with a call to from_posterior.
import numpy as np
from scipy import stats
from pymc3 import Interpolated

def from_posterior(param, samples):
    smin, smax = np.min(samples), np.max(samples)
    width = smax - smin
    x = np.linspace(smin, smax, 100)
    y = stats.gaussian_kde(samples)(x)
    # what was never sampled should have a small probability but not 0,
    # so we'll extend the domain and use linear approximation of density on it
    x = np.concatenate([[x[0] - 3 * width], x, [x[-1] + 3 * width]])
    y = np.concatenate([[0], y, [0]])
    return Interpolated(param, x, y)
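A hedged usage sketch, following the pattern of the linked notebook: inside a fresh pm.Model() context, each original prior is replaced by its Interpolated KDE approximation built from the previous trace, for example:
# Sketch only; delta_1 is one of the scalar parameters in the model below,
# and trace is the result of sampling the previously fitted model.
with pm.Model() as updated_model:
    delta_1 = from_posterior('delta_1', trace['delta_1'])
    # ... and similarly for the other parameters ...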
And here is my original model.
def create_model(batsmen, bowlers, id1, id2, X):
    testval = [[-5, 0, 1, 2, 3.5, 5] for i in range(0, 9)]
    l = [i for i in range(9)]
    model = pm.Model()
    with model:
        delta_1 = pm.Uniform("delta_1", lower=0, upper=1)
        delta_2 = pm.Uniform("delta_2", lower=0, upper=1)
        inv_sigma_sqr = pm.Gamma("sigma^-2", alpha=1.0, beta=1.0)
        inv_tau_sqr = pm.Gamma("tau^-2", alpha=1.0, beta=1.0)
        mu_1 = pm.Normal("mu_1", mu=0, sigma=1/pm.math.sqrt(inv_tau_sqr), shape=len(batsmen))
        mu_2 = pm.Normal("mu_2", mu=0, sigma=1/pm.math.sqrt(inv_tau_sqr), shape=len(bowlers))
        delta = pm.math.ge(l, 3) * delta_1 + pm.math.ge(l, 6) * delta_2
        eta = [pm.Deterministic("eta_" + str(i), delta[i] + mu_1[id1[i]] - mu_2[id2[i]]) for i in range(9)]
        cutpoints = pm.Normal("cutpoints", mu=0, sigma=1/pm.math.sqrt(inv_sigma_sqr),
                              transform=pm.distributions.transforms.ordered, shape=(9, 6), testval=testval)
        X_ = [pm.OrderedLogistic("X_" + str(i), cutpoints=cutpoints[i], eta=eta[i], observed=X[i]-1) for i in range(9)]
    return model
Here, the problem is that some of my parameters, such as mu_1, are multidimensional. This is why I get the following error:
ValueError: points have dimension 1, dataset has dimension 1500
because of the line y = stats.gaussian_kde(samples)(x).
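For context, scipy.stats.gaussian_kde interprets a 2-D input as an array of shape (n_dims, n_points), so passing it the raw trace of a vector-valued parameter makes it treat each row (presumably the 1500 posterior draws here) as a separate dimension. A minimal sketch of the per-component call it does accept, assuming samples = trace['mu_1'] with shape (n_draws, len(batsmen)):
# Sketch: build the 1-D KDE one component at a time.
from scipy import stats
import numpy as np

x0 = np.linspace(samples[:, 0].min(), samples[:, 0].max(), 100)
y0 = stats.gaussian_kde(samples[:, 0])(x0)   # KDE of the first component only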
Can someone please help me make this work for multi-dimensional parameters? I don't properly understand what KDE is and how the code computes it.
Thank you in advance!!
How to find the covariance matrix of a fitted model in Python (the equivalent of R's vcov() function)?
lmfit <- lm(formula = Y ~ X, data=Data_df)
lmpred <- predict(lmfit, newdata=Data_df, se.fit=TRUE, interval = "prediction")
std_er <- sqrt(((X0) %*% vcov(lmfit)) %*% t(X0))
I am trying to convert the above code to Python, for which I need the covariance matrix of the fitted model, i.e. vcov.
I can't use np.cov(), since I'm after the covariance matrix of the model's coefficients, not of the data.
I have already tried statsmodels.regression.linear_model.OLSResults.cov_params(), but I am not getting the same values as in R.
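For reference, one common reason cov_params() disagrees with R is the intercept: lm(Y ~ X) adds one automatically, while a statsmodels OLS only includes it if the design matrix has an explicit constant column. A hedged sketch of the comparison (Data_df, X and Y are the names from the R snippet above):
# Sketch: add the intercept column explicitly so the design matrix matches lm(Y ~ X).
import statsmodels.api as sm

X_design = sm.add_constant(Data_df[['X']])   # intercept + X, like R's model matrix
ols_fit = sm.OLS(Data_df['Y'], X_design).fit()
vcov = ols_fit.cov_params()                  # analogue of vcov(lmfit) in R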
The scipy ODR code can independently calculate the parameter covariance matrix; here is an example extracted from the source code of my zunzun.com online curve fitter:
from scipy.optimize import curve_fit
import numpy as np
import scipy.odr
import scipy.stats

x = np.array([5.357, 5.797, 5.936, 6.161, 6.697, 6.731, 6.775, 8.442, 9.861])
y = np.array([0.376, 0.874, 1.049, 1.327, 2.054, 2.077, 2.138, 4.744, 7.104])

def f(x, b0, b1):
    return b0 + (b1 * x)

def f_wrapper_for_odr(beta, x):  # parameter order for odr
    return f(x, *beta)

parameters, cov = curve_fit(f, x, y)

model = scipy.odr.odrpack.Model(f_wrapper_for_odr)
data = scipy.odr.odrpack.Data(x, y)
myodr = scipy.odr.odrpack.ODR(data, model, beta0=parameters, maxit=0)
myodr.set_job(fit_type=2)  # maxit=0 and fit_type=2: evaluate statistics at the curve_fit solution
parameterStatistics = myodr.run()

df_e = len(x) - len(parameters)  # degrees of freedom, error
cov_beta = parameterStatistics.cov_beta  # parameter covariance matrix from ODR
sd_beta = parameterStatistics.sd_beta * parameterStatistics.sd_beta  # squared standard errors

t_df = scipy.stats.t.ppf(0.975, df_e)
ci = []
for i in range(len(parameters)):
    ci.append([parameters[i] - t_df * parameterStatistics.sd_beta[i],
               parameters[i] + t_df * parameterStatistics.sd_beta[i]])

tstat_beta = parameters / parameterStatistics.sd_beta  # coeff. t-statistics
pstat_beta = (1.0 - scipy.stats.t.cdf(np.abs(tstat_beta), df_e)) * 2.0  # coef. p-values

for i in range(len(parameters)):
    print('parameter:', parameters[i])
    print(' conf interval:', ci[i][0], ci[i][1])
    print(' tstat:', tstat_beta[i])
    print(' pstat:', pstat_beta[i])
    print()

print('Covariance matrix:')
print(cov_beta)
Please provide specific details on what you're using.
Assuming you're using numpy arrays for your data, there's the numpy.cov estimator.
This works when vcov() returns a 1x1 data frame. I minimized my function in Python using:
fit = scipy.optimize.minimize(fun, x0=x, method = 'L-BFGS-B')
Then I extracted the dense inverse-Hessian estimate from the result as follows:
vcov = fit['hess_inv'].todense().ravel()
This gave me the same result (to within about ±1e-3) as stats4::vcov() in R for scenarios where vcov() returns a 1x1 data frame.
I have just installed scipy and tried a few example codes. My scipy version is 0.17.0, but I can't figure out what's going wrong with scipy.integrate.solve_bvp. I am trying to run the following code from this link,
# In the second example we solve a simple Sturm-Liouville problem::
# y'' + k**2 * y = 0
# y(0) = y(1) = 0
# It is known that a non-trivial solution y = A * sin(k * x) is possible for
# k = pi * n, where n is an integer. To establish the normalization constant
# A = 1 we add a boundary condition::
# y'(0) = k
# Again we rewrite our equation as a first order system and implement its
# right-hand side evaluation::
# y1' = y2
# y2' = -k**2 * y1
import numpy as np
import matplotlib.pyplot as plt  # needed for the plotting at the end
from scipy.integrate import solve_bvp

def fun(x, y, p):
    k = p[0]
    return np.vstack((y[1], -k**2 * y[0]))

# Note that parameters p are passed as a vector (with one element in our
# case).

# Implement the boundary conditions:
def bc(ya, yb, p):
    k = p[0]
    return np.array([ya[0], yb[0], ya[1] - k])

# Set up the initial mesh and guess for y. We aim to find the solution for
# k = 2 * pi; to achieve that we set values of y to approximately follow
# sin(2 * pi * x):
x = np.linspace(0, 1, 5)
y = np.zeros((2, x.size))
y[0, 1] = 1
y[0, 3] = -1

# Run the solver with 6 as an initial guess for k.
sol = solve_bvp(fun, bc, x, y, p=[6])

# We see that the found k is approximately correct:
sol.p[0]
# 6.28329460046

# And finally plot the solution to see the anticipated sinusoid:
x_plot = np.linspace(0, 1, 100)
y_plot = sol.sol(x_plot)[0]
plt.plot(x_plot, y_plot)
plt.xlabel("x")
plt.ylabel("y")
plt.show()
I am getting this error:
Traceback (most recent call last):
File "my.py", line 19, in <module>
from scipy.integrate import solve_bvp
ImportError: cannot import name solve_bvp
Why am I getting this error? Other scipy integrators such as scipy.integrate.odeint and scipy.integrate.ode work fine.
If you go to the documentation for the function, under the heading "Notes" you will see that the function is new as of version 0.18.0, so it is not available in your 0.17.0 installation.
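A quick way to confirm the installed version and upgrade:
import scipy
print(scipy.__version__)   # solve_bvp requires scipy >= 0.18.0

# then, from a shell:
# pip install --upgrade scipy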
I am generating a time series that has a drastic change in the middle.
import numpy as np
size = 120
x1 = np.random.randn(size)
x2 = np.random.randn(size) * 4
x = np.hstack([x1, x2])
The series x looks like this (plot not shown): roughly unit-variance noise for the first 120 points, then noise about four times as wide.
The goal is now to use PyMC3 to estimate the posterior distribution of the time when the change occurred (the switchpoint). This should be around index 120. I've used the following code:
from pymc3 import Model, Normal, HalfNormal, DiscreteUniform

basic_model = Model()
with basic_model:
    mu1 = Normal('mu1', mu=0, sd=10)
    mu2 = Normal('mu2', mu=0, sd=10)
    sigma1 = HalfNormal('sigma1', sd=2)
    sigma2 = HalfNormal('sigma2', sd=2)
    tau = DiscreteUniform('tau', 0, 240)

    # get likelihoods
    y1 = Normal('y1', mu=mu1, sd=sigma1, observed=x[:tau])
    y2 = Normal('y2', mu=mu2, sd=sigma2, observed=x[tau:])
Doing this gives an error that I cannot use tau to slice the array. What would be the approach to solve this in PyMC? It seems like I'll need the slicing to be done by something stochastic in PyMC.
It turns out PyMC3 has a switch operation. Let t be the array of time indices for the observations.
import pymc3 as pm
import numpy as np

t = np.arange(x.size)  # time index of each observation in x

basic_model = pm.Model()
with basic_model:
    mu1 = pm.Normal('mu1', mu=0, sd=10)
    mu2 = pm.Normal('mu2', mu=0, sd=10)
    sigma1 = pm.HalfNormal('sigma1', sd=2)
    sigma2 = pm.HalfNormal('sigma2', sd=2)
    switchpoint = pm.DiscreteUniform('switchpoint', t.min(), t.max())
    # select the regime of each observation depending on the switchpoint
    tau_mu = pm.math.switch(t >= switchpoint, mu1, mu2)
    tau_sigma = pm.math.switch(t >= switchpoint, sigma1, sigma2)
    y = pm.Normal('y1', mu=tau_mu, sd=tau_sigma, observed=x)
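A hedged sketch of how one might then sample from this model and inspect the result (PyMC3 automatically assigns a Metropolis step to the discrete switchpoint while the continuous parameters use NUTS):
with basic_model:
    # switchpoint is discrete, so PyMC3 falls back to Metropolis for it
    trace = pm.sample(2000, tune=1000)

pm.traceplot(trace)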