I'm trying to fit several lines sharing the same intercept.
import numpy as np
import pymc
# Observations
a_actual = np.array([[2., 5., 7.]]).T
b_actual = 3.
t = np.arange(100)
obs = np.random.normal(a_actual * t + b_actual)
# PyMC Model
def model_linear():
    b = pymc.Uniform('b', value=1., lower=0, upper=200)
    a = []
    s = []
    r = []
    for i in range(len(a_actual)):
        s.append(pymc.Uniform('sigma_{}'.format(i), value=1., lower=0, upper=100))
        a.append(pymc.Uniform('a_{}'.format(i), value=1., lower=0, upper=200))
        r.append(pymc.Normal('r_{}'.format(i), mu=a[i] * t + b, tau=1/s[i]**2, value=obs[i], observed=True))
    return [pymc.Container(a), b, pymc.Container(s), pymc.Container(r)]
model = pymc.Model(model_linear())
map = pymc.MAP(model)
map.fit()
map.revert_to_max()
The computed MAP estimates are far from the actual values. The estimates are also very sensitive to the lower and upper bounds of the sigmas and a, to the actual values of a (e.g. a = [.2, .5, .7] gives good estimates), and to the number of lines being regressed.
Is this the right way of performing my linear regressions?
PS: I tried using an Exponential prior distribution for the sigmas, but the results were no better.
I think using MAP might not be your best bet. If you are able to do proper sampling, then consider replacing the last three lines of your example code with:
MCMClinear = pymc.MCMC(model)
MCMClinear.sample(10000, burn=5000, thin=5)
linear_output = MCMClinear.stats()
Printing the linear_output for this gives very accurate inferences for the parameters.
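For example, you can read off the posterior means from linear_output like this (a quick sketch; the keys follow the 'a_{}' / 'sigma_{}' names used in the model above):
print(linear_output['b']['mean'])
for i in range(len(a_actual)):
    print(linear_output['a_{}'.format(i)]['mean'],
          linear_output['sigma_{}'.format(i)]['mean'])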
Let's say I have a dataframe with 4 variables. I want to see if I can generate a posterior of gamma mixtures over all the variables, with the goal of finding clusters for each observation. I'm guessing I will need some sort of multivariate gamma distribution? But how would I go about this?
Here is some pymc3 code as an example with one variable, looking for a mixture of two gammas (I have chosen arbitrary parameters):
with pm.Model() as m:
    p = pm.Dirichlet('p', a=np.ones(2))
    alpha = pm.Gamma('alpha', alpha=1, beta=1, shape=2)
    beta = pm.Gamma('beta', alpha=1, beta=1, shape=2)
    comp_dist = pm.Gamma.dist(alpha=alpha, beta=beta, shape=(2,))
    like = pm.Mixture('y', w=p, comp_dists=comp_dist, observed=data)
    trace = pm.sample(1000)
So my question is, how would I extend this basic example to multiple variables? I assume that I need to define relationships between the variables somehow to encode them in the model? I feel that I understand the basics of mixture modelling, but at the same time feel that I am missing something pretty fundamental.
Here's how the multidimensional case should work:
J = 4 # num dimensions
K = 2 # num clusters
with pm.Model() as m:
    p = pm.Dirichlet('p', a=np.ones(K))
    alpha = pm.Gamma('alpha', alpha=1, beta=1, shape=(J,K))
    beta = pm.Gamma('beta', alpha=1, beta=1, shape=(J,K))
    gamma = pm.Gamma.dist(alpha=alpha, beta=beta, shape=(J,K))
    like = pm.Mixture('y', w=p, comp_dists=gamma, observed=X, shape=J)
    trace = pm.sample(1000)
where X.shape should be (N,J).
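For a quick self-contained test, synthetic data of the right shape could be generated like this (a sketch; the cluster parameters below are arbitrary assumptions, not from the question):
import numpy as np

N, J, K = 1000, 4, 2                                  # observations, dimensions, clusters (assumed sizes)
rng = np.random.default_rng(0)
z = rng.choice(K, size=N, p=[0.6, 0.4])               # latent cluster assignments
shape_true = np.array([[2., 5.], [1., 3.], [4., 2.], [3., 6.]])   # (J, K) gamma shapes
rate_true = np.array([[1., 2.], [0.5, 1.], [2., 1.], [1., 3.]])   # (J, K) gamma rates
# Draw each dimension from the gamma component selected by z
X = rng.gamma(shape_true[:, z].T, 1.0 / rate_true[:, z].T)        # shape (N, J)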
Note on Symmetry Breaking
The difficult part is going to be resolving identifiability issues, but I think that's beyond the scope of the question. Maybe have a look at how the GMM tutorial breaks symmetry using the pm.Potential function. I expect highly-correlated parameterizations of the likelihood function(s), like alpha and beta, would exacerbate the issue, so perhaps consider switching to the mu and sigma parameterization.
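For reference, the kind of ordering constraint used in the GMM tutorial might look roughly like this under a mu/sigma parameterization (a sketch; the shapes and the choice to order the means of dimension 0 are assumptions, not from the question):
import numpy as np
import pymc3 as pm
import theano.tensor as tt

J, K = 4, 2  # dimensions, clusters

with pm.Model() as m:
    p = pm.Dirichlet('p', a=np.ones(K))
    mu = pm.Gamma('mu', alpha=1, beta=1, shape=(J, K))
    sigma = pm.Gamma('sigma', alpha=1, beta=1, shape=(J, K))
    # Break label-switching symmetry: penalize orderings where the
    # cluster means of dimension 0 are not increasing.
    order_means = pm.Potential('order_means',
                               tt.switch(mu[0, 1] - mu[0, 0] < 0, -np.inf, 0.))
    comp_dists = pm.Gamma.dist(mu=mu, sigma=sigma, shape=(J, K))
    like = pm.Mixture('y', w=p, comp_dists=comp_dists, observed=X, shape=J)  # X as above, shape (N, J)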
I want to get a kernel density estimate for positive data points. Using the Python SciPy stats package, I came up with the following code.
import numpy as np
import scipy.stats as st

def get_pdf(data):
    a = np.array(data)
    ag = st.gaussian_kde(a)
    x = np.linspace(0, max(data), int(max(data)))
    y = ag(x)
    return x, y
This works perfectly for most data sets, but it gives an erroneous result for "all positive" data points. To make sure this works correctly, I use numerical integration to compute the area under this curve.
def trapezoidal_2(ag, a, b, n):
    h = float(b - a) / n
    s = 0.0
    s += ag(a)[0]/2.0
    for i in range(1, n):
        s += ag(a + i*h)[0]
    s += ag(b)[0]/2.0
    return s * h
Since the data is spread over the region (0, int(max(data))), we should get a value close to 1 when executing the following code.
b = 1
data = st.pareto.rvs(b, size=10000)
data = list(data)
a = np.array(data)
ag = st.gaussian_kde(a)
trapezoidal_2(ag, 0, int(max(data)), int(max(data))*2)
But it gives a value close to 0.5 when I test it.
However, when I integrate from -100 to max(data), it gives a value close to 1.
trapezoidal_2(ag, -100, int(max(data)), int(max(data))*2+200)
The reason is, ag (KDE) is defined for values less than 0, even though the original data set contains only positive values.
So how can I get a kernel density estimate that considers only positive values, such that the area under the curve in the region (0, max(data)) is close to 1?
The choice of bandwidth is quite important when performing kernel density estimation. I think Scott's Rule and Silverman's Rule work well for distributions similar to a Gaussian. However, they do not work well for the Pareto distribution.
Quote from the doc:
Bandwidth selection strongly influences the estimate obtained from the KDE (much more so than the actual shape of the kernel). Bandwidth selection can be done by a "rule of thumb", by cross-validation, by "plug-in methods" or by other means; see [3], [4] for reviews. gaussian_kde uses a rule of thumb, the default is Scott's Rule.
Try with different bandwidth values, for example:
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
b = 1
sample = stats.pareto.rvs(b, size=3000)
kde_sample_scott = stats.gaussian_kde(sample, bw_method='scott')
kde_sample_scalar = stats.gaussian_kde(sample, bw_method=1e-3)
# Compute the integral:
print('integral scott:', kde_sample_scott.integrate_box_1d(0, np.inf))
print('integral scalar:', kde_sample_scalar.integrate_box_1d(0, np.inf))
# Graph:
x_span = np.logspace(-2, 1, 550)
plt.plot(x_span, stats.pareto.pdf(x_span, b), label='theoretical pdf')
plt.plot(x_span, kde_sample_scott(x_span), label="estimated pdf 'scott'")
plt.plot(x_span, kde_sample_scalar(x_span), label="estimated pdf 'scalar'")
plt.xlabel('X'); plt.legend();
gives:
integral scott: 0.5572130540733236
integral scalar: 0.9999999999968957
and a plot comparing the theoretical pdf with the two estimated pdfs. We see that the KDE using Scott's rule is wrong.
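As an aside, the cross-validation route mentioned in the quoted documentation can be sketched with scikit-learn (an illustration with an assumed bandwidth grid, not part of the original answer):
import numpy as np
from scipy import stats
from sklearn.neighbors import KernelDensity
from sklearn.model_selection import GridSearchCV

sample = stats.pareto.rvs(1, size=3000)

# Grid-search the bandwidth by maximizing the cross-validated log-likelihood
grid = GridSearchCV(KernelDensity(kernel='gaussian'),
                    {'bandwidth': np.logspace(-3, 0, 20)},
                    cv=5)
grid.fit(sample[:, None])
print('best bandwidth:', grid.best_params_['bandwidth'])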
I have two data sets in which two values were measured. I am interested in the difference between the values and in the standard deviation of that difference. I made a histogram to which I would like to fit two normal distributions, in order to calculate the difference between the maxima. I would also like to evaluate the effect of having much less data for one value in one of the data sets. I've already looked at this link but it is not really what I need:
Python: finding the intersection point of two gaussian curves
import numpy as np
import matplotlib.pyplot as plt

for ii in range(2, 8):
    # Kanal = ii - 1
    file = filepath + '\\Mappe1.txt'
    data = np.loadtxt(file, delimiter='\t', skiprows=1)
    data = data[:, ii]
    plt.hist(data, bins=100)
    plt.xlabel("bins")
    plt.ylabel("Counts")
    plt.tight_layout()
    plt.grid()
    plt.figure()
plt.show()
Quick and dirty fitting can be readily achieved using scipy:
import numpy as np
from scipy.optimize import curve_fit  # non-linear curve fitting tool
from matplotlib import pyplot as plt

def func2fit(x, m_1, m_2, std_1, std_2, height1, height2):  # sum of two Gauss curves
    return (height1 * np.exp(-(x - m_1)**2 / (2 * std_1**2))
            + height2 * np.exp(-(x - m_2)**2 / (2 * std_2**2)))

init_guess = (-.3, .3, .5, .5, 3000, 3000)
# initial guesses for the parameters (m_1, m_2, std_1, std_2, height1, height2), read off your first figure

# do the fitting
fit_pars, pcov = curve_fit(func2fit, xdata, ydata, p0=init_guess)
# fit_pars contains the means, SDs and heights; pcov contains the estimated covariance of these parameters

plt.plot(xdata, func2fit(xdata, *fit_pars), label='fit')  # plot the fit
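Here xdata and ydata are assumed to be the histogram bin centres and counts from your data; one way to obtain them (a sketch, not part of the original snippet) is:
counts, edges = np.histogram(data, bins=100)
xdata = 0.5 * (edges[:-1] + edges[1:])   # bin centres
ydata = counts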
For further reference consult the scipy manual page:
curve_fit
Assuming that the two samples are independent, there is no need to handle this problem using curve fitting. It's basic statistics. Here's some code that does the calculations required, with the source attributed in a comment.
## adapted from http://onlinestatbook.com/2/estimation/difference_means.html
from random import gauss
from numpy import sqrt
sample_1 = [ gauss(0,1) for _ in range(10) ]
sample_2 = [ gauss(1,.5) for _ in range(20) ]
n_1 = len(sample_1)
n_2 = len(sample_2)
mean_1 = sum(sample_1)/n_1
mean_2 = sum(sample_2)/n_2
SSE = sum([(_-mean_1)**2 for _ in sample_1]) + sum([(_-mean_2)**2 for _ in sample_2])
df = (n_1-1) + (n_2-1)
MSE = SSE/df
n_h = 2 / ( 1/n_1 + 1/n_2 )
s_mean_diff = sqrt( 2* MSE / n_h )
print('difference between means', abs(mean_1 - mean_2))
print('std dev of this difference', s_mean_diff)
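If you also want an interval around that difference (not in the original answer, just a standard follow-up using the same quantities), a t-based 95% confidence interval could look like:
from scipy import stats

diff = mean_2 - mean_1
t_crit = stats.t.ppf(0.975, df)   # two-sided 95% critical value with the pooled df
ci_low, ci_high = diff - t_crit * s_mean_diff, diff + t_crit * s_mean_diff
print('95% CI for the difference of means:', (ci_low, ci_high))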
I am completely new to pymc3, so please excuse the fact that this is likely trivial. I have a very simple model where I am predicting a binary response function. The model is almost a verbatim copy of this example: https://github.com/pymc-devs/pymc3/blob/master/pymc3/examples/gelman_bioassay.py
I get back the model parameters (alpha, beta, and theta), but I can't seem to figure out how to overplot the predictions of the model vs. the input data. I tried doing this (using the parlance of the bioassay model):
import numpy as np
from scipy.stats import binom

mean_alpha = np.mean(trace['alpha'])
mean_beta = np.mean(trace['beta'])
pred_death = binom.rvs(n, 1. / (1. + np.exp(-(mean_alpha + mean_beta * dose))))
and then plotting dose vs. pred_death, but this is manifestly not correct as I get different draws of the binomial distribution every time.
Related to this is another question: how do I evaluate the goodness of fit? I couldn't seem to find anything to that effect in the "getting started" pymc3 tutorial.
Thanks very much for any advice!
Hi, a simple way to do it is as follows:
from pymc3 import *
import numpy as np
from numpy import ones, array

# Samples for each dose level
n = 5 * ones(4, dtype=int)
# Log-dose
dose = array([-.86, -.3, -.05, .73])

def invlogit(x):
    return np.exp(x) / (1 + np.exp(x))

with Model() as model:
    # Logit-linear model parameters
    alpha = Normal('alpha', 0, 0.01)
    beta = Normal('beta', 0, 0.01)
    # Calculate probabilities of death
    theta = Deterministic('theta', invlogit(alpha + beta * dose))
    # Data likelihood
    deaths = Binomial('deaths', n=n, p=theta, observed=[0, 1, 3, 5])
    start = find_MAP()
    step = NUTS(scaling=start)
    trace = sample(2000, step, start=start, progressbar=True)

import matplotlib.pyplot as plt
death_fit = np.percentile(trace['theta'], 50, axis=0)
plt.plot(dose, death_fit, 'g', marker='.', lw=1.25, ls='-', ms=5, mew=1)
plt.show()
If you want to plot dose vs pred_death, where pred_death is computed from the mean estimated values of alpha and beta, then do:
pred_death = 1./(1. + np.exp(-(mean_alpha + mean_beta * dose)))
plt.plot(dose, pred_death)
Instead, if you want to plot dose vs pred_death where pred_death is computed taking into account the uncertainty in the posterior for alpha and beta, then probably the easiest way is to use the function sample_ppc. Maybe something like:
ppc = pm.sample_ppc(trace, samples=100, model=pmmodel)
for i in range(100):
    plt.plot(dose, ppc['deaths'][i], 'bo', alpha=0.5)
Using Posterior Predictive Checks (ppc) is a way to check how well your model behaves by comparing the predictions of the model to your actual data. Here you have an example of sample_ppc
Other options could be to plot the mean value plus some interval of interest.
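For example, a posterior-predictive mean with a 95% band could be sketched from the same ppc samples (an illustration, assuming ppc['deaths'] has shape (samples, len(dose))):
import numpy as np
import matplotlib.pyplot as plt

deaths_ppc = ppc['deaths']                       # (samples, len(dose))
mean_deaths = deaths_ppc.mean(axis=0)
low, high = np.percentile(deaths_ppc, [2.5, 97.5], axis=0)

plt.plot(dose, mean_deaths, 'b-', label='posterior predictive mean')
plt.fill_between(dose, low, high, color='b', alpha=0.2, label='95% interval')
plt.legend()
plt.show()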
I successfully implemented a mixture of 3 normals using PyMC (shown at https://drive.google.com/file/d/0Bwnmbh6ueWhqSkUtV1JFZDJwLWc, and similar to the question asked at How to model a mixture of 3 Normals in PyMC?)
My next step is to try and code mixtures of multivariate normals.
There is, however, an additional complexity to the data: a hierarchy, with sets of observations belonging to a parent observation. The clustering is done on the parent observations, not on the individual observations themselves. This first step generates the data (60 parents, with 50 observations per parent), and works fine.
import numpy as np
import pymc as mc
n = 3 #mixtures
B = 5 #Bias between those at different mixtures
tau = 3 #Variances
nprov = 60 #number of parent observations
mu = [[0,0],[0,B],[-B,0]]
true_cov0 = np.array([[1.,0.],[0.,1.]])
true_cov1 = np.array([[1.,0.],[0.,tau**(2)]])
true_cov2 = np.array([[tau**(-2),0],[0.,1.]])
trueprobs = [.4, .3, .3] #probability of being in each of the three mixtures
prov = np.random.multinomial(1, trueprobs, size=nprov)
v = prov[:,1] + (prov[:,2])*2
numtoeach = 50
n_obs = nprov*numtoeach
vAll = np.tile(v,numtoeach)
ndata = numtoeach*nprov
p1 = range(nprov)
prov1 = np.tile(p1,numtoeach)
data = (vAll==0)*(np.random.multivariate_normal(mu[0],true_cov0,ndata)).T \
+ (vAll==1)*(np.random.multivariate_normal(mu[1],true_cov1,ndata)).T \
+ (vAll==2)*(np.random.multivariate_normal(mu[2],true_cov2,ndata)).T
data=data.T
However, when I try to use PyMC to do the sampling, I run into trouble ('error: failed in converting 3rd argument `tau' of flib.prec_mvnorm to C/Fortran array'):
p = 2 #covariates
prior_mu1=np.ones(p)
prior_mu2=np.ones(p)
prior_mu3=np.ones(p)
post_mu1 = mc.Normal("returns1",prior_mu1,1,size=p)
post_mu2 = mc.Normal("returns2",prior_mu2,1,size=p)
post_mu3 = mc.Normal("returns3",prior_mu3,1,size=p)
post_cov_matrix_inv1 = mc.Wishart("cov_matrix_inv1",n_obs,np.eye(p) )
post_cov_matrix_inv2 = mc.Wishart("cov_matrix_inv2",n_obs,np.eye(p) )
post_cov_matrix_inv3 = mc.Wishart("cov_matrix_inv3",n_obs,np.eye(p) )
#Combine prior means and variance matrices
meansAll= np.array([post_mu1,post_mu2,post_mu3])
precsAll= np.array([post_cov_matrix_inv1,post_cov_matrix_inv2,post_cov_matrix_inv3])
dd = mc.Dirichlet('dd', theta=(1,)*n)
category = mc.Categorical('category', p=dd, size=nprov)
#This step accounts for the hierarchy: observations' means are equal to their parents mean
#Parent is labeled prov1
@mc.deterministic
def mean(category=category, meansAll=meansAll):
    lat = category[prov1]
    new = meansAll[lat]
    return new

@mc.deterministic
def prec(category=category, precsAll=precsAll):
    lat = category[prov1]
    return precsAll[lat]

obs = mc.MvNormal("observed returns", mean, prec, observed=True, value=data)
I know the problem is not with the format of the simulated observed data, because this step would work fine, in place of the above:
obs = mc.MvNormal( "observed returns", post_mu3, post_cov_matrix_inv3, observed = True, value = data )
As a result, I think the issue is how the mean vector ('mean') and the covariance matrix ('prec') are entered, I just don't know how. Like I said, this worked fine with mixtures of normal distributions, but mixtures of multivariate normals is adding a complexity I can't figure out.
This is a good example of the difficulty PyMC has with vectors of multivariate variables. Not that it's difficult, just not as elegant as it should be. You can sum the individual MVN log-likelihoods and wrap that as an observed stochastic.
@mc.observed
def obs(value=data, mean=mean, prec=prec):
    return sum(mc.mv_normal_like(v, m, T) for v, m, T in zip(value, mean, prec))
Here is the IPython notebook
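For completeness, a hedged sketch of how the full model might then be assembled and sampled with PyMC2, using the nodes defined above (not from the original answer or the linked notebook):
import pymc as mc  # assumed already imported as mc, as in the question

M = mc.MCMC([dd, category,
             post_mu1, post_mu2, post_mu3,
             post_cov_matrix_inv1, post_cov_matrix_inv2, post_cov_matrix_inv3,
             mean, prec, obs])
M.sample(iter=10000, burn=5000)
print(M.stats()['returns1']['mean'])   # posterior mean of the first component's mean vector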