Metropolis Sampling - python

I'm working through a book called Bayesian Analysis in Python. The book focuses heavily on the package PyMC3 but is a little vague on the theory behind it so I'm quite confused.
Say I have data like this:
data = np.array([51.06, 55.12, 53.73, 50.24, 52.05, 56.40, 48.45, 52.34, 55.65, 51.49, 51.86, 63.43, 53.00, 56.09, 51.93, 52.31, 52.33, 57.48, 57.44, 55.14, 53.93, 54.62, 56.09, 68.58, 51.36, 55.47, 50.73, 51.94, 54.95, 50.39, 52.91, 51.5, 52.68, 47.72, 49.73, 51.82, 54.99, 52.84, 53.19, 54.52, 51.46, 53.73, 51.61, 49.81, 52.42, 54.3, 53.84, 53.16])
And I'm looking at a model like this:
Using Metropolis sampling, how can I fit a model that estimates mu and sigma?
Here is my guess at pseudo-code from what I've read:
M, S = 50, 1
G = 1
# These are priors right?
mu = stats.norm(loc=M, scale=S)
sigma = stats.halfnorm(scale=G)
target = stats.norm
steps = 1000
mu_samples = [50]
sigma_samples = [1]
for i in range(steps):
    # proposed sample...
    mu_i, sigma_i = mu.rvs(), sigma.rvs()
    # Something happens here
    # How do I calculate the likelihood??
    "..."
    # some evaluation of a likelihood ratio??
    a = "some"/"ratio"
    acceptance_bar = np.random.random()
    if a > acceptance_bar:
        mu_samples.append(mu_i)
        sigma_samples.append(sigma_i)
What am I missing??

I hope the following example helps you. In this example I am going to assume we know the value of sigma, so we only have a prior for mu.
sigma = data.std() # we are assuming we know sigma
steps = 1000
mu_old = data.mean() # initial value, just a good guess
mu_samples = []
# we evaluate the prior for the initial point
prior_old = stats.norm(M, S).pdf(mu_old)
# we evaluate the likelihood for the initial point
likelihood_old = np.prod(stats.norm(mu_old, sigma).pdf(data))
# Bayes' theorem (omitting the denominator) for the initial point
post_old = prior_old * likelihood_old
for i in range(steps):
    # proposal distribution, propose a new value from the old one
    mu_new = stats.norm.rvs(mu_old, 0.1)
    # we evaluate the prior
    prior_new = stats.norm(M, S).pdf(mu_new)
    # we evaluate the likelihood
    likelihood_new = np.prod(stats.norm(mu_new, sigma).pdf(data))
    # Bayes' theorem (omitting the denominator)
    post_new = prior_new * likelihood_new
    # the ratio of posteriors (we do not need to know the normalizing constant)
    a = post_new / post_old
    if np.random.random() < a:
        mu_old = mu_new
        post_old = post_new
    mu_samples.append(mu_old)
Notes:
Notice that I have defined a proposal distribution, which in this case is a Gaussian centered at mu_old with a standard deviation of 0.1 (an arbitrary value). In practice the efficiency of MH depends heavily on the proposal distribution, so PyMC3 (like other practical implementations of MH) uses heuristics to tune the proposal distribution.
For simplicity I used the pdf in this example, but in practice it is more convenient to work with the logpdf, which avoids underflow problems without changing the results.
The likelihood is computed as a product of the densities of the individual data points, since the observations are assumed to be independent.
The ratio you were missing is the ratio of posteriors
If you do not accept the new proposed value, then you save (again) the old value.
Remember to check this repository for the errata and for updated versions of the code. The major difference between the updated code and the code in the book is that the preferred way to run a model with PyMC3 is now to just call pm.sample() and let PyMC3 choose the samplers and initialization points for you.
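Putting the logpdf note into practice, here is a hedged sketch (not taken from the book) of the same random-walk Metropolis loop in log space, extended to also sample sigma with the half-normal prior from the question's pseudo-code. The proposal scales (0.1) and the half-normal scale G are arbitrary choices, and data, M and S are assumed to be defined as above:
import numpy as np
from scipy import stats

G = 5  # scale of the half-normal prior on sigma (an arbitrary guess)
steps = 1000
mu_old, sigma_old = data.mean(), data.std()
samples = []

def log_post(mu, sigma):
    # log-prior + log-likelihood, up to the (unneeded) normalizing constant
    return (stats.norm(M, S).logpdf(mu)
            + stats.halfnorm(scale=G).logpdf(sigma)
            + np.sum(stats.norm(mu, sigma).logpdf(data)))

logpost_old = log_post(mu_old, sigma_old)
for i in range(steps):
    # Gaussian random-walk proposals for both parameters
    mu_new = stats.norm.rvs(mu_old, 0.1)
    sigma_new = stats.norm.rvs(sigma_old, 0.1)
    if sigma_new > 0:  # a negative scale has zero prior probability
        logpost_new = log_post(mu_new, sigma_new)
        # accept/reject on the log scale: log(a) = logpost_new - logpost_old
        if np.log(np.random.random()) < logpost_new - logpost_old:
            mu_old, sigma_old, logpost_old = mu_new, sigma_new, logpost_new
    samples.append((mu_old, sigma_old))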

Related

How can I generate numbers in a set range but skewed towards a specific point? [duplicate]

I would like to implement a function in Python (using NumPy) that takes a mathematical function (for example p(x) = e^(-x), like below) as input and generates random numbers that are distributed according to that function's probability distribution. And I need to plot them, so we can see the distribution.
What I actually need is a random number generator for exactly the following 2 mathematical functions as input, but if it could take other functions, why not:
1) p(x) = e^(-x)
2) g(x) = (1/sqrt(2*pi)) * e^(-(x^2)/2)
Does anyone have any idea how this is doable in python?
For simple distributions like the ones you need, or if you have a CDF that is easy to invert in closed form, you can find plenty of samplers in NumPy, as correctly pointed out in Olivier's answer.
For arbitrary distributions you could use Markov chain Monte Carlo (MCMC) sampling methods.
The simplest and maybe easiest-to-understand variant of these algorithms is Metropolis sampling.
The basic idea goes like this:
start from a random point x and take a random step xnew = x + delta
evaluate the desired probability distribution in the starting point p(x) and in the new one p(xnew)
if the new point is more probable p(xnew)/p(x) >= 1 accept the move
if the new point is less probable, randomly decide whether to accept or reject depending on how probable [1] the new point is
take a new step from this point and repeat the cycle
It can be shown (see e.g. Sokal [2]) that points sampled with this method follow the desired probability distribution.
An extensive implementation of Monte Carlo methods in Python can be found in the PyMC3 package.
Example implementation
Here's a toy example just to show you the basic idea, not meant in any way as a reference implementation. Please refer to mature packages for any serious work.
import numpy as np

def uniform_proposal(x, delta=2.0):
    return np.random.uniform(x - delta, x + delta)

def metropolis_sampler(p, nsamples, proposal=uniform_proposal):
    x = 1  # start somewhere
    for i in range(nsamples):
        trial = proposal(x)  # random neighbour from the proposal distribution
        acceptance = p(trial)/p(x)
        # accept the move conditionally
        if np.random.uniform() < acceptance:
            x = trial
        yield x
Let's see if it works with some simple distributions
Gaussian mixture
def gaussian(x, mu, sigma):
    return 1./sigma/np.sqrt(2*np.pi)*np.exp(-((x-mu)**2)/2./sigma/sigma)
p = lambda x: gaussian(x, 1, 0.3) + gaussian(x, -1, 0.1) + gaussian(x, 3, 0.2)
samples = list(metropolis_sampler(p, 100000))
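The question also asks to plot the samples; a minimal sketch with matplotlib (assumed to be installed) would be:
import matplotlib.pyplot as plt

plt.hist(samples, bins=100, density=True)  # normalized histogram of the draws
plt.show()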
Cauchy
def cauchy(x, mu, gamma):
    return 1./(np.pi*gamma*(1.+((x-mu)/gamma)**2))
p = lambda x: cauchy(x, -2, 0.5)
samples = list(metropolis_sampler(p, 100000))
Arbitrary functions
You don't really have to sample from proper probability distributions. You might just have to enforce a limited domain in which to sample your random steps [3].
p = lambda x: np.sqrt(x)
samples = list(metropolis_sampler(p, 100000, domain=(0, 10)))
p = lambda x: (np.sin(x)/x)**2
samples = list(metropolis_sampler(p, 100000, domain=(-4*np.pi, 4*np.pi)))
Conclusions
There is still way too much to say, about proposal distributions, convergence, correlation, efficiency, applications, Bayesian formalism, other MCMC samplers, etc.
I don't think this is the proper place and there is plenty of much better material than what I could write here available online.
[1] The idea here is to favor exploration where the probability is higher, but still look at low-probability regions as they might lead to other peaks. The choice of the proposal distribution, i.e. how you pick new points to explore, is fundamental: steps that are too small might constrain you to a limited area of your distribution, while steps that are too big can lead to a very inefficient exploration.
[2] Physics oriented. The Bayesian formalism (Metropolis-Hastings) is preferred these days, but IMHO it's a little harder to grasp for beginners. There are plenty of tutorials available online, see e.g. this one from Duke University.
[3] Implementation not shown so as not to add too much confusion, but it's straightforward: you just have to wrap trial steps at the domain edges or make the desired function go to zero outside the domain.
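As a hedged sketch of [3] (the domain keyword is not part of the sampler shown above, it is just one possible way to support it; uniform_proposal is reused from earlier), the simplest variant zeroes the target function outside the domain so that out-of-range proposals are always rejected:
import numpy as np

def metropolis_sampler(p, nsamples, proposal=uniform_proposal, domain=None):
    # zero the target outside the domain so those proposals are never accepted
    if domain is not None:
        lo, hi = domain
        target = lambda x: p(x) if lo <= x <= hi else 0.0
    else:
        target = p
    x = 1  # start somewhere inside the domain where p is nonzero
    for i in range(nsamples):
        trial = proposal(x)
        acceptance = target(trial)/target(x)
        if np.random.uniform() < acceptance:
            x = trial
        yield x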
NumPy offers a wide range of probability distributions.
The first function is an exponential distribution with parameter 1.
np.random.exponential(1)
The second one is a normal distribution with mean 0 and variance 1.
np.random.normal(0, 1)
Note that in both cases the arguments are optional, as these are the default values for these distributions.
As a sidenote, you can also find those distributions in the random module as random.expovariate and random.gauss respectively.
More general distributions
While NumPy will likely cover all your needs, remember that you can always compute the inverse cumulative distribution function of your distribution and apply it to values drawn from a uniform distribution.
inverse_cdf(np.random.uniform())
For example, if NumPy did not provide the exponential distribution, you could do this:
def exponential():
    return -np.log(1 - np.random.uniform())
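For the question's second function (the standard normal), the inverse CDF has no simple closed form, but SciPy exposes it as norm.ppf, so a hedged sketch of the same idea would be:
import numpy as np
from scipy.stats import norm

def standard_normal():
    # inverse CDF (ppf) applied to a uniform draw
    return norm.ppf(np.random.uniform())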
If you encounter distributions whose CDF is not easy to compute, then consider filippo's great answer.

simulated annealing in python with multiple variables

I found this old Stack Overflow question that is essentially exactly what I want.
Algorithm to optimize multiple variables more efficiently than trial-and-error
Unfortunately my more advanced maths are a bit lacking and I have some questions about the answer by ElKamina. If anyone can take a look and explain some of these basic math concepts, hopefully it will help me out.
The answer I am referring to is as follows:
def simAnneal( w, seed_x, numSteps=100000, sigma=0.01 ):
    optimal_x = [i for i in seed_x]
    optimal_w = w(optimal_x)
    cur_w = w(seed_x)
    for i in range(numSteps):
        new_x = [i+random.gauss(0, sigma) for i in seed_x]
        new_w = w(new_x)
        if (new_w > cur_w) or (random.random() > new_w / cur_w):
            cur_x = new_x
            cur_w = new_w
            if cur_w > optimal_w:
                optimal_w = cur_w
                optimal_x = cur_x
    return optimal_x
I am unfamiliar with seed_x, sigma and the Gaussian distribution, so I am not sure how they are coming up with new_x.
I am attempting to optimize a value that depends on many variables (>10) and am trying to do better than random guessing, as that would take forever.
Thanks!
Simulated Annealing TLDR:
We're trying to find a set of parameters that will maximize a function by adding random noise to the parameters. If a change leads to improvement, it is accepted; once in a while we also accept negative changes, but the probability of that lowers with time and with how bad the change is.
In the snippet above, the function actually uses multiple parameters but accepts them as a list:
w is the function whose parameters are optimized
seed_x is the initial guess of parameters - can be selected at random, but an informed guess would be better
Gaussian is just the "shape" of the noise, such that small values are more common. random.random()*sigma (all values equally likely) would work just fine there, too.
sigma is the magnitude of noise to be injected. It should not exceed a couple percent of typical param values. If param values vastly differ in magnitude, consider using a list of sigmas specific for each parameter.
MISSING: notion of temperature, which will actually make it simulated annealing
Rewriting it with temperature, more descriptive names, and more explicit:
import random

def simAnneal(utility_func, initial_params, numSteps=100000,
              noise_magnitude=0.01, cooling_rate=0.999):
    optimal_params = initial_params
    params = initial_params.copy()  # lists are mutable, so .copy()
    best_utility = utility = utility_func(*initial_params)
    temperature = 1.0
    for i in range(numSteps):
        temperature *= cooling_rate
        # consider using numpy/scipy for params and noise
        new_params = [param + random.gauss(0, noise_magnitude)
                      for param in params]
        # explicitly passing multiple parameters
        new_utility = utility_func(*new_params)
        if (new_utility > best_utility
                or random.random()*temperature > new_utility / best_utility):
            params, utility = new_params, new_utility
            if new_utility > best_utility:
                optimal_params, best_utility = params, utility
    return optimal_params
Last but not least - unless the problem is extremely non-convex, I'd bet SGD would perform much better.
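For completeness, a hypothetical usage sketch of the rewritten function above on a made-up toy utility (the function and its optimum at (3, -1) are invented purely for illustration):
# toy utility: always positive, peaks at a = 3, b = -1
def toy_utility(a, b):
    return 1.0 / (1.0 + (a - 3.0)**2 + (b + 1.0)**2)

best = simAnneal(toy_utility, [0.0, 0.0], numSteps=50000, noise_magnitude=0.05)
print(best)  # should typically end up near [3, -1]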

Is the code proper way of understanding Vae vs. Standard Autoencoder?

I have created two mini encoding networks for a standard autoencoder and a VAE and plotted each. Would just like to know if my understanding is correct for this mini case. Note it's only one epoch and it ends with the encoding.
import numpy as np
from matplotlib import pyplot as plt
np.random.seed(0)
fig, (ax,ax2) = plt.subplots(2,1)
def relu(x):
    c = np.where(x>0,x,0)
    return c
#Standard autoencoder
x = np.random.randint(0,2,[100,5])
w_autoencoder = np.random.normal(0,1,[5,2])
bottle_neck = relu(x.dot(w_autoencoder))
ax.scatter(bottle_neck[:,0],bottle_neck[:,1])
#VAE autoencoder
w_vae1 = np.random.normal(0,1,[5,2])
w_vae2 = np.random.normal(0,1,[5,2])
mu = relu(x.dot(w_vae1))
sigma = relu(x.dot(w_vae2))
epsilon_sample = np.random.normal(0,1,[100,2])
latent_space = mu+np.log2(sigma)*epsilon_sample
ax2.scatter(latent_space[:,0], latent_space[:,1],c='red')
Since your motive is "understanding", I should say you are in the right direction and working on this sort of implementation definitely helps you in understanding. But I strongly believe "understanding" has to be achieved first in books/papers and only then via the implementation/code.
On a quick glance, your standard autoencoder looks fine. Via your implementation you are making the assumption that your latent code lies in the range [0, infinity), because of relu(x).
However, when implementing the VAE, you can't obtain the latent code with the relu(x) function. This is where your "theoretical" understanding is missing. In a standard VAE, we make the assumption that the latent code is a sample from a Gaussian distribution, and as such we approximate the parameters of that Gaussian distribution, i.e. the mean and covariance. Further, we also make another assumption that this Gaussian distribution is factorized, which means the covariance matrix is diagonal. In your implementation, you are approximating the mean and diagonal covariance as:
mu = relu(x.dot(w_vae1))
sigma = relu(x.dot(w_vae2))
which seems fine, but when drawing a sample (the reparameterization trick), it is not clear why you introduced np.log2(). Since you are using the ReLU() activation, you may end up with 0 in your sigma variable, and when you do np.log2(0), you will get -inf. I believe you were motivated by some available code where they do:
mu = relu(x.dot(w_vae1)) #same as yours
logOfSigma = x.dot(w_vae2) #you are forcing your network to learn log(sigma)
Now, since you are approximating the log of sigma, you can allow your output to be negative, because to get sigma you would do something like np.exp(logOfSigma), and this would ensure that you always get positive values in your diagonal covariance matrix. Now, to do the sampling, you can simply do:
latent_code = mu + np.exp(logOfSigma)*epsilon_sample
Hope this helps!
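Putting the answer's pieces together, a minimal corrected sketch of the VAE branch (variable names follow the question's code; the ReLU on mu is kept only to stay close to the original):
import numpy as np

def relu(x):
    return np.where(x > 0, x, 0)

np.random.seed(0)
x = np.random.randint(0, 2, [100, 5])
w_vae1 = np.random.normal(0, 1, [5, 2])
w_vae2 = np.random.normal(0, 1, [5, 2])

mu = relu(x.dot(w_vae1))        # mean of the latent Gaussian, same as the question
logOfSigma = x.dot(w_vae2)      # the network outputs log(sigma), so it may be negative
epsilon_sample = np.random.normal(0, 1, [100, 2])

# reparameterization trick: np.exp(logOfSigma) is always positive
latent_code = mu + np.exp(logOfSigma) * epsilon_sample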

Producing an MLE for a pair of distributions in python

Ok, so my current curve fitting code has a step that uses scipy.stats to determine the right distribution based on the data,
distributions = [st.laplace, st.norm, st.expon, st.dweibull, st.invweibull, st.lognorm, st.uniform]
mles = []
for distribution in distributions:
    pars = distribution.fit(data)
    mle = distribution.nnlf(pars, data)
    mles.append(mle)
results = [(distribution.name, mle) for distribution, mle in zip(distributions, mles)]
for dist in sorted(zip(distributions, mles), key=lambda d: d[1]):
    print(dist)
best_fit = sorted(zip(distributions, mles), key=lambda d: d[1])[0]
print('Best fit reached using {}, MLE value: {}'.format(best_fit[0].name, best_fit[1]))
print([mod[0].name for mod in sorted(zip(distributions, mles), key=lambda d: d[1])])
Where data is a list of numeric values. This is working great so far for fitting unimodal distributions, confirmed in a script that randomly generates values from random distributions and uses curve_fit to redetermine the parameters.
Now I would like to make the code able to handle bimodal distributions, like the example below:
Is it possible to get an MLE for a pair of models from scipy.stats in order to determine whether a particular pair of distributions is a good fit for the data? Something like
distributions = [st.laplace, st.norm, st.expon, st.dweibull, st.invweibull, st.lognorm, st.uniform]
distributionPairs = [[modelA.name, modelB.name] for modelA in distributions for modelB in distributions]
and use those pairs to get an MLE value of that pair of distributions fitting the data?
It's not a complete answer but it may help you to solve your problem. Let's say you know your data are generated by two densities.
A solution would be to use the k-means or EM algorithm.
Initialization.
You initialize your algorithm by assigning every observation to one density or the other, and you initialize the two densities (you initialize the parameters of each density; one of the parameters in your case is the family, e.g. "gaussian", "laplace", and so on).
Iteration.
Then, iteratively, you run the following two steps:
Step 1.
Optimize the parameters assuming that the assignment of every point is right. You can use any optimization solver. This step provides you with an estimation of the best two densities (with given parameters) that fit your data.
Step 2.
You reassign every observation to one density or the other according to which gives it the greater likelihood.
You repeat until convergence.
This is very well explained in this web-page
https://people.duke.edu/~ccc14/sta-663/EMAlgorithm.html
If you do not know how many densities have generated your data, the problem is more difficult. You have to work with a penalized classification problem, which is a bit harder.
Here is a coding example for an easy case: you know that your data come from 2 different Gaussians (you don't know how many points are generated from each density). In your case, you can adjust this code to loop over every possible pair of densities (computationally longer, but it would empirically work, I presume):
import scipy.stats as st
import numpy as np
# hard-coded data generation
data = np.random.normal(-3, 1, size=1000)
data[600:] = np.random.normal(loc=3, scale=2, size=400)
# initialization
mu1 = -1
sigma1 = 1
mu2 = 1
sigma2 = 1
# criterion to stop iteration
epsilon = 0.1
stop = False
while not stop:
    # classification step: assign each observation to the more likely density
    classification = np.zeros(len(data))
    classification[st.norm.pdf(data, mu1, sigma1) > st.norm.pdf(data, mu2, sigma2)] = 1
    mu1_old, mu2_old, sigma1_old, sigma2_old = mu1, mu2, sigma1, sigma2
    # fitting step: refit each density on the points assigned to it
    pars1 = st.norm.fit(data[classification == 1])
    mu1, sigma1 = pars1
    pars2 = st.norm.fit(data[classification == 0])
    mu2, sigma2 = pars2
    # stopping criterion
    stop = ((mu1_old - mu1)**2 + (mu2_old - mu2)**2
            + (sigma1_old - sigma1)**2 + (sigma2_old - sigma2)**2) < epsilon
# result
print("The first density is gaussian:", mu1, sigma1)
print("The second density is gaussian:", mu2, sigma2)
print("A rate of", np.mean(classification), "is classified in the first density")
Hope it helps.

Truncated multivariate normal in SciPy?

I'm trying to automate a process that at some point needs to draw samples from a truncated multivariate normal. That is, it's a regular multivariate normal distribution (i.e. Gaussian), but the variables are constrained to a cuboid. My given inputs are the mean and covariance of the full multivariate normal, but I need samples in my box.
Up to now, I'd just been rejecting samples outside the box and resampling as necessary, but I'm starting to find that my process sometimes gives me (a) large covariances and (b) means that are close to the edges. These two events conspire against the speed of my system.
So what I'd like to do is sample the distribution correctly in the first place. Googling led only to this discussion or the truncnorm distribution in scipy.stats. The former is inconclusive and the latter seems to be for one variable. Is there any native multivariate truncated normal? And is it going to be any better than rejecting samples, or should I do something smarter?
I'm going to start working on my own solution, which would be to rotate the untruncated Gaussian to its principal axes (with an SVD decomposition or something), use a product of truncated Gaussians to sample the distribution, then rotate that sample back, and reject/resample as necessary. If the truncated sampling is more efficient, I think this should sample the desired distribution faster.
So, according to the Wikipedia article, sampling a multivariate truncated normal distribution (MTND) is more difficult. I ended up taking a relatively easy way out and using an MCMC sampler to relax an initial guess towards the MTND as follows.
I used emcee to do the MCMC work. I find this package phenomenally easy-to-use. It only requires a function that returns the log-probability of the desired distribution. So I defined this function
import numpy as np
from numpy.linalg import inv

def lnprob_trunc_norm(x, mean, bounds, C):
    if np.any(x < bounds[:,0]) or np.any(x > bounds[:,1]):
        return -np.inf
    else:
        return -0.5*(x-mean).dot(inv(C)).dot(x-mean)
Here, C is the covariance matrix of the multivariate normal. Then, you can run something like
S = emcee.EnsembleSampler(Nwalkers, Ndim, lnprob_trunc_norm, args = (mean, bounds, C))
pos, prob, state = S.run_mcmc(pos, Nsteps)
for given mean, bounds and C. You need an initial guess for the walkers' positions pos, which could be a ball around the mean,
pos = emcee.utils.sample_ball(mean, np.sqrt(np.diag(C)), size=Nwalkers)
or sampled from an untruncated multivariate normal,
pos = numpy.random.multivariate_normal(mean, C, size=Nwalkers)
and so on. I personally first do several thousand steps of sample discarding (burn-in), because it's fast, then force the remaining outliers back within the bounds, and then run the MCMC sampling.
The number of steps for convergence is up to you.
Note also that emcee easily supports basic parallelization by adding the argument threads=Nthreads to the EnsembleSampler initialization. So you can make this blazing fast.
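Pulling the snippets together, here is a hedged end-to-end sketch using the emcee 2.x style API shown in the answer; the dimensionality, covariance, bounds and walker settings are placeholder values you would replace with your own:
import numpy as np
import emcee
from numpy.linalg import inv

Ndim, Nwalkers, Nsteps = 3, 32, 2000
mean = np.zeros(Ndim)
C = np.eye(Ndim)                              # placeholder covariance
bounds = np.tile([-1.0, 2.0], (Ndim, 1))      # per-dimension [lower, upper]

def lnprob_trunc_norm(x, mean, bounds, C):    # same log-probability as above
    if np.any(x < bounds[:, 0]) or np.any(x > bounds[:, 1]):
        return -np.inf
    return -0.5 * (x - mean).dot(inv(C)).dot(x - mean)

S = emcee.EnsembleSampler(Nwalkers, Ndim, lnprob_trunc_norm, args=(mean, bounds, C))
# start the walkers in a small ball around the mean, well inside the box
pos = emcee.utils.sample_ball(mean, 0.1 * np.ones(Ndim), size=Nwalkers)
pos, prob, state = S.run_mcmc(pos, Nsteps)
samples = S.flatchain                         # emcee 2.x; newer versions: S.get_chain(flat=True)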
I have reimplemented an algorithm which does not depend on MCMC but creates independent and identically distributed (iid) samples from the truncated multivariate normal distribution. Having iid samples can be very useful! I used to also use emcee as described in the answer by Warrick, but for convergence the number of samples needed exploded in higher dimensions, making it impractical for my use case.
The algorithm was introduced by Botev (2016) and uses an accept-reject algorithm based on minimax exponential tilting. It was originally implemented in MATLAB but reimplementing it for Python increased the performance significantly compared to running it using the MATLAB engine in Python. It also works well and is fast at higher dimensions.
The code is available at: https://github.com/brunzema/truncated-mvn-sampler.
An Example:
d = 10 # dimensions
# random mu and cov
mu = np.random.rand(d)
cov = 0.5 - np.random.rand(d ** 2).reshape((d, d))
cov = np.triu(cov)
cov += cov.T - np.diag(cov.diagonal())
cov = np.dot(cov, cov)
# constraints
lb = np.zeros_like(mu) - 1
ub = np.ones_like(mu) * np.inf
# create truncated normal and sample from it
n_samples = 100000
tmvn = TruncatedMVN(mu, cov, lb, ub)
samples = tmvn.sample(n_samples)
Plotting the first dimension results in:
Reference:
Botev, Z. I., (2016), The normal law under linear restrictions: simulation and estimation via minimax tilting, Journal of the Royal Statistical Society Series B, 79, issue 1, p. 125-148
Simulating a truncated multivariate normal can be tricky and usually involves some conditional sampling by MCMC.
My short answer is: you can use my code (https://github.com/ralphma1203/trun_mvnt)!!! It implements a Gibbs sampler algorithm which can handle general linear constraints of the form lower <= Dx <= upper, even when you have a non-full-rank D and more constraints than the dimensionality.
import numpy as np
from trun_mvnt import rtmvn, rtmvt
########## Traditional problem, probably what you need... ##########
##### lower < X < upper #####
# So D = identity matrix
D = np.diag(np.ones(4))
lower = np.array([-1,-2,-3,-4])
upper = -lower
Mean = np.zeros(4)
Sigma = np.diag([1,2,3,4])
n = 10 # want 10 final samples
burn = 100 # burn-in first 100 iterates
thin = 1 # thinning for Gibbs
random_sample = rtmvn(n, Mean, Sigma, D, lower, upper, burn, thin)
# Numpy array n-by-p as result!
random_sample
########## Non-full rank problem (more constraints than dimension) ##########
Mean = np.array([0,0])
Sigma = np.array([1, 0.5, 0.5, 1]).reshape((2,2)) # bivariate normal
D = np.array([1,0,0,1,1,-1]).reshape((3,2)) # non-full rank problem
lower = np.array([-2,-1,-2])
upper = np.array([2,3,5])
n = 500 # want 500 final sample
burn = 100 # burn-in first 100 iterates
thin = 1 # thinning for Gibbs
random_sample = rtmvn(n, Mean, Sigma, D, lower, upper, burn, thin) # Numpy array n-by-p as result!
A little late I guess, but for the record: you could use Hamiltonian Monte Carlo. There is a MATLAB module named "HMC exact". It shouldn't be too difficult to translate to Python.
