I want to modify the following NumPyro model:
import jax.numpy as jnp
from jax import random, vmap
import numpy as np
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS
numpyro.set_host_device_count(6)
def model(y=None, X=None):
    n_predictors = X.shape[1]
    with numpyro.plate('state', n_predictors):
        theta = numpyro.sample('theta', dist.Gamma(concentration=1, rate=1/5000))
    mu = jnp.dot(X, theta)
    numpyro.sample('y', dist.Normal(loc=mu, scale=1), obs=y)
theta = np.zeros(5) # True parameters
theta[0] = 2
theta[1] = 3
X = np.random.randn(20, theta.size)**2 # Design matrix
y = X @ theta + np.random.randn(X.shape[0])  # data
rng_key = random.PRNGKey(74674)
rng_key, rng_key_ = random.split(rng_key)
mcmc = MCMC(NUTS(model), num_warmup=500, num_samples=1000, num_chains=6)
mcmc.run(rng_key_, X=X, y=y)
mcmc.print_summary()
I want to include Bernoulli RVs z that choose which theta is active, and then infer these z. Basically, I am trying to do variable selection. The idea is illustrated in the following model (which fails):
def failed_model(y=None, X=None):
    n_predictors = X.shape[1]
    with numpyro.plate('state', n_predictors):
        theta = numpyro.sample('theta', dist.Gamma(concentration=1, rate=1/5000))
        z = numpyro.sample('z', dist.Bernoulli(0.1))
    mu = jnp.dot(X, theta * z)
    numpyro.sample('y', dist.Normal(loc=mu, scale=1), obs=y)
I tried to understand the second example from the [docs][1], but it shows masking for a fixed array rather than for a random one.
[1]: https://num.pyro.ai/en/stable/distributions.html
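One direction that may work (a sketch on my part, assuming a NumPyro version that provides numpyro.infer.DiscreteHMCGibbs; selection_model is my own name) is to keep the Bernoulli sites and sample them with a Gibbs-within-HMC kernel, which alternates NUTS updates for the continuous sites with Gibbs updates for the discrete ones:

from numpyro.infer import DiscreteHMCGibbs

def selection_model(y=None, X=None):
    n_predictors = X.shape[1]
    with numpyro.plate('state', n_predictors):
        theta = numpyro.sample('theta', dist.Gamma(concentration=1, rate=1/5000))
        # z is a discrete latent variable; DiscreteHMCGibbs updates it by Gibbs sampling
        z = numpyro.sample('z', dist.Bernoulli(0.1))
    mu = jnp.dot(X, theta * z)
    numpyro.sample('y', dist.Normal(loc=mu, scale=1), obs=y)

kernel = DiscreteHMCGibbs(NUTS(selection_model))  # NUTS handles theta, Gibbs handles z
mcmc = MCMC(kernel, num_warmup=500, num_samples=1000, num_chains=6)
mcmc.run(rng_key_, X=X, y=y)
mcmc.print_summary()  # the posterior mean of each z is its inclusion probability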
I wrote a simple class in Python to compute spherical harmonic basis functions and corresponding coefficients for a function defined on a sphere. See below.
import numpy as np
from scipy.special import sph_harm
import scipy.integrate as integrate
class SphHarmBasis():
    def __init__(self, n_coeffs=16):
        self._n_coeffs = n_coeffs
        self.basis = self.sph_harm_basis()

    def get_sph_harm_function(self, l, m):
        ''' Compute a real spherical harmonic basis function '''
        def basis_function(theta, phi):
            Y = sph_harm(abs(m), l, phi, theta)
            if m < 0:
                Y = np.sqrt(2) * (-1)**m * Y.imag
            elif m > 0:
                Y = np.sqrt(2) * (-1)**m * Y.real
            return Y.real
        return basis_function

    def sph_harm_basis(self):
        ''' Get a specified number of basis functions '''
        basis_functions = []
        dimension = 0
        l, m = 0, 0
        while dimension < self._n_coeffs:
            while m <= l:
                basis_functions.append(self.get_sph_harm_function(l, m))
                m += 1
                dimension += 1
            l += 1
            m = -l
        return basis_functions

    def sph_harm_coeff(self, Y, f):
        ''' Compute a spherical harmonic coefficient '''
        def integrand(phi, theta):
            return f(theta, phi) * Y(theta, phi) * np.sin(theta)
        return integrate.dblquad(integrand, 0., np.pi, lambda x: 0., lambda x: 2*np.pi)[0]

    def sph_harm_transform(self, f, basis=None):
        ''' Get spherical harmonic coefficients for a function in a basis '''
        if basis is None:
            basis = self.basis
        coeffs = []
        for Y in basis:
            coeffs.append(self.sph_harm_coeff(Y, f))
        return coeffs

    def sph_harm_reconstruct(self, coeffs, basis=None):
        ''' Reconstruct a function from a basis and corresponding coefficients '''
        if basis is None:
            basis = self.basis
        return lambda theta, phi: np.dot(coeffs, [f(theta, phi) for f in basis])
And you can use it like this:
def my_sphere_function(theta, phi):
    return np.sin(theta+phi)
my_basis = SphHarmBasis(n_coeffs=25)
# encode your function in the desired basis and record the coefficients
my_coeffs = my_basis.sph_harm_transform(my_sphere_function)
# reconstruct your function at a point
point = (np.pi, np.pi/8)
my_basis.sph_harm_reconstruct(my_coeffs)(*point), my_sphere_function(*point)
My problem is that it doesn't seem to be very accurate.
For example, running the following test code reports a mean absolute error of 0.4933463836715332.
import matplotlib.pyplot as plt
my_reconstr_function = my_basis.sph_harm_reconstruct(my_coeffs)
pts = np.linspace(0, 2*np.pi)
pts_2d = np.reshape(np.stack(np.meshgrid(pts,pts), axis=-1), (-1, 2))
actual = []
approx = []
pts_total = 0
for n, pt in enumerate(pts_2d):
    f_actual = my_sphere_function(*pt)
    f_approx = my_reconstr_function(*pt)
    actual.append(f_actual)
    approx.append(f_approx)
    pts_total += abs(f_approx - f_actual)
print(pts_total / len(pts_2d))
plt.subplot(121)
plt.imshow(np.reshape(np.repeat(actual, 3), (50, 50, 3)))
plt.subplot(122)
plt.imshow(np.reshape(np.repeat(approx, 3), (50, 50, 3)))
Actual vs. reconstructed images
On the left is an image representation of the actual function, and on the right is the reconstructed function.
Where is my error? Changing the number of coefficients doesn't seem to affect much.
My problem was that I was evaluating the reconstruction outside the domain of the spherical harmonic basis functions. The integration bounds were already correct: the polar angle must lie in the interval [0, pi], while the azimuthal angle lies in [0, 2pi]. But in my test code, pts_2d covered [0, 2pi]^2. With the test points restricted to the valid domain, all is well!
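For reference, a minimal fix to the test grid (my own sketch of the correction described above; the variable names theta_pts and phi_pts are mine, keeping the same 50x50 resolution):

theta_pts = np.linspace(0, np.pi, 50)    # polar angle in [0, pi]
phi_pts = np.linspace(0, 2*np.pi, 50)    # azimuthal angle in [0, 2*pi]
pts_2d = np.reshape(np.stack(np.meshgrid(theta_pts, phi_pts), axis=-1), (-1, 2))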
Reconstruction comparison after fixing the test code
I am trying to maximize a target function f(x) with scipy.optimize.minimize. It usually takes 4-5 hours to run because f(x) involves a lot of complex matrix computations. To improve its speed, I want to use a GPU. I have already tried the TensorFlow package: since I define f(x) with NumPy, I had to convert it to TensorFlow's format, but it does not support the complex matrix computations I need. What other package or approach could I use? Any suggestions?
To be specific, my calculation scheme is as follows:
1. Calculate the expectation \langle\phi|H|\phi\rangle, where H = x * H_0 and x is the parameter.
2. Let \phi evolve through the dynamics of the Schrödinger equation; a different H corresponds to a different \phi_end, so the parameter x determines the expectation.
3. Change x and calculate the corresponding expectation.
4. Find the specific x that minimizes the expectation.
Here is a simple example of part of my code:
import numpy as np
from scipy.linalg import expm
import scipy.optimize as opt

# create the initial complex matrices
N = 2  # dimension of the matrix
H = np.array([[1.0 + 1.0j] * N] * N)  # a complex matrix with shape (N, N)
A = np.array([[0.0j] * N] * N)
A[0][0] = 1.0 + 1j

# calculate the expectation
def value(phi):
    exp_H = expm(H)  # matrix exponential
    new_phi = np.matmul(exp_H, phi)
    x = np.matmul(H, new_phi)
    # np.inner does not conjugate its arguments, so conjugate phi explicitly
    expectation = np.inner(np.conj(phi), x)
    return expectation

# constants
tmax = 1
dt = 0.1
nstep = int(tmax/dt)
phi_init = [1.0 + 1.0j] * N

# right-hand side of the Schrödinger equation
def dXdt(t, phi, H):
    return -1j * np.matmul(H, phi)

def f(X):
    phi = [[0j] * N] * nstep  # store phi at every time step
    phi[0] = phi_init
    # let phi evolve through the dynamics of the Schrödinger equation (forward Euler)
    for i in range(nstep - 1):
        phi[i + 1] = phi[i] + dXdt(i * dt, phi[i], X[i] * H) * dt
    # calculate the corresponding value; the optimizer needs a real scalar
    f_result = value(phi[-1])
    return np.real(f_result)

# initialize the parameter
X0 = np.ones(nstep)
results = opt.minimize(f, X0)  # minimize the target function
opt_x = results.x
PS:
Python version: 3.7
Operating system: Windows 10
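One option worth trying (a sketch, not a tested solution for this exact problem): JAX supports complex64/complex128 arrays, runs on GPU, and can JIT-compile the whole computation. This assumes a JAX version recent enough to provide jax.scipy.linalg.expm:

import jax
import jax.numpy as jnp
from jax.scipy.linalg import expm

N = 2
H = jnp.array([[1.0 + 1.0j] * N] * N)  # complex matrices are supported on GPU

@jax.jit
def value(phi):
    new_phi = expm(H) @ phi
    # jnp.vdot conjugates its first argument; take the real part for the optimizer
    return jnp.real(jnp.vdot(phi, H @ new_phi))

phi = jnp.array([1.0 + 1.0j] * N)
print(value(phi))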
I am trying to apply an exponential fit to my data to determine the point at which the value drops by 1/e. When plotted, the fit seems to favor smaller values and does not portray the true relationship.
import numpy as np
import matplotlib
matplotlib.use("TkAgg")  # set the TkAgg backend explicitly, otherwise a low-level error is raised
from matplotlib import pyplot as plt
from scipy.optimize import curve_fit
def autoCorrelation(sample, longTime, temp, plotTau=False):
    # compute the empirical autocovariance at lag tau, averaged over time longTime
    sample.takeTimeStep(timesteps=1500)  # 1500 time steps to let the sample reach equilibrium
    M = np.zeros(longTime)
    for tau in range(longTime):
        M[tau] = sample.calcMagnetisation()
        sample.takeTimeStep()
    M_ave = np.average(M)  # time average
    M = M - M_ave
    autocorrelation = np.correlate(M, M, mode='full')
    autocorrelation /= autocorrelation.max()  # normalise so the maximum autocorrelation is 1
    autocorrelationArray = autocorrelation[int(len(autocorrelation)/2):]
    x = np.arange(0, len(autocorrelationArray), 1)

    # apply an exponential fit
    def exponential(x, a, b):
        return a * np.exp(-b * x)

    popt, pcov = curve_fit(exponential, x, np.absolute(autocorrelationArray))
    yy = exponential(x, *popt)
    plt.plot(x, np.absolute(autocorrelationArray), 'o', x, yy)
    plt.title('Exponential Fit of Magnetisation Autocorrelation against Time for Temperature = ' + str(temp) + ' J/k')
    plt.xlabel('Time / Number of Iterations')
    plt.ylabel('Magnetisation Autocorrelation')
    plt.show()

    # tau_e is 1/b from the exponential a * np.exp(-b * x); the reciprocal converts the rate to time steps
    print('tau_e is ' + str(1/popt[1]))
if __name__ == '__main__':
    # plot the autocorrelation against time for each temperature
    longTime = 100
    temps = [1, 2, 2.3, 2.6, 3, 4]
    for T in temps:
        magnet = Ising(30, T)  # (N, temp)
        autoCorrelation(magnet, longTime, T)
Note: Ising is a class in another .py file containing the functions takeTimeStep and calcMagnetisation.
I would expect greater values of tau_e.
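For reference, the 1/e reading of tau_e follows directly from the fit model: a * np.exp(-b * x) equals a/e exactly at x = 1/b, which is why the code prints 1/popt[1]. A quick standalone sanity check (my own snippet, with arbitrary example values of a and b):

import numpy as np

a, b = 1.0, 0.25
tau_e = 1.0 / b  # the lag at which a*exp(-b*x) has dropped to a/e
assert np.isclose(a * np.exp(-b * tau_e), a / np.e)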
My knowledge of maths is limited, which is probably why I am stuck. I have a spectrum to which I am trying to fit two Gaussian peaks. I can fit the largest peak, but I cannot fit the smallest. I understand that I need to sum the Gaussian functions for the two peaks, but I do not know where I have gone wrong. An image of my current output is shown:
The blue line is my data and the green line is my current fit. There is a shoulder to the left of the main peak in my data, which I am currently trying to fit using the following code:
import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import leastsq

time = []
counts = []
for line in open('/some/folder/to/file.txt', 'r'):
    segs = line.split()
    time.append(float(segs[0]))
    counts.append(float(segs[1]))
time_array = np.array(time, dtype=float)
counts_array = np.array(counts, dtype=float)
def model(time_array0, coeffs0):
    a = coeffs0[0] + coeffs0[1] * np.exp(-((time_array0 - coeffs0[2]) / coeffs0[3])**2)
    b = coeffs0[4] + coeffs0[5] * np.exp(-((time_array0 - coeffs0[6]) / coeffs0[7])**2)
    return a + b

def residuals(coeffs, counts_array, time_array):
    return counts_array - model(time_array, coeffs)
# for each peak: baseline, amplitude, centre, width (indices 0-3 and 4-7)
peak1 = np.array([0,6337,16.2,4.47,0,2300,13.5,2], dtype=float)
#peak2 = np.array([0,2300,13.5,2], dtype=float)
x, flag = leastsq(residuals, peak1, args=(counts_array, time_array))
#z, flag = leastsq(residuals, peak2, args=(counts_array, time_array))
plt.plot(time_array, counts_array)
plt.plot(time_array, model(time_array, x), color = 'g')
#plt.plot(time_array, model(time_array, z), color = 'r')
plt.show()
This code worked for me, provided that you are only fitting a function that is a combination of two Gaussian distributions.
I made a residuals function that adds the two Gaussian functions and then subtracts the result from the real data.
The parameters (p) passed to SciPy's least-squares function are: the mean of the first Gaussian (m), the difference between the means of the first and second Gaussians (dm, i.e. the horizontal shift), the standard deviation of the first (sd1), and the standard deviation of the second (sd2).
import numpy as np
from scipy.optimize import leastsq
import matplotlib.pyplot as plt
######################################
# Setting up test data
def norm(x, mean, sd):
    norm = []
    for i in range(x.size):
        norm += [1.0/(sd*np.sqrt(2*np.pi))*np.exp(-(x[i] - mean)**2/(2*sd**2))]
    return np.array(norm)
mean1, mean2 = 0, -2
std1, std2 = 0.5, 1
x = np.linspace(-20, 20, 500)
y_real = norm(x, mean1, std1) + norm(x, mean2, std2)
######################################
# Solving
m, dm, sd1, sd2 = [5, 10, 1, 1]
p = [m, dm, sd1, sd2] # Initial guesses for leastsq
y_init = norm(x, m, sd1) + norm(x, m + dm, sd2) # For final comparison plot
def res(p, y, x):
    m, dm, sd1, sd2 = p
    m1 = m
    m2 = m1 + dm
    y_fit = norm(x, m1, sd1) + norm(x, m2, sd2)
    err = y - y_fit
    return err
plsq = leastsq(res, p, args = (y_real, x))
y_est = norm(x, plsq[0][0], plsq[0][2]) + norm(x, plsq[0][0] + plsq[0][1], plsq[0][3])
plt.plot(x, y_real, label='Real Data')
plt.plot(x, y_init, 'r.', label='Starting Guess')
plt.plot(x, y_est, 'g.', label='Fitted')
plt.legend()
plt.show()
You can use Gaussian mixture models from scikit-learn:
from sklearn import mixture
from scipy.stats import norm
import matplotlib.pyplot as plt
import numpy as np

# GaussianMixture replaces the old mixture.GMM API, which has been removed
clf = mixture.GaussianMixture(n_components=2, covariance_type='full')
clf.fit(yourdata.reshape(-1, 1))  # expects a 2-D array of shape (n_samples, n_features)
m1, m2 = clf.means_.ravel()
w1, w2 = clf.weights_
c1, c2 = clf.covariances_.ravel()
histdist = plt.hist(yourdata, 100, density=True)
plotgauss1 = lambda x: plt.plot(x, w1 * norm.pdf(x, m1, np.sqrt(c1)), linewidth=3)
plotgauss2 = lambda x: plt.plot(x, w2 * norm.pdf(x, m2, np.sqrt(c2)), linewidth=3)
plotgauss1(histdist[1])
plotgauss2(histdist[1])
You can also use the function below to fit as many Gaussians as you want via the ncomp parameter:
from sklearn import mixture
from scipy.stats import norm
import matplotlib.pyplot as plt
import numpy as np

def fit_mixture(data, ncomp=2, doplot=False):
    clf = mixture.GaussianMixture(n_components=ncomp, covariance_type='full')
    clf.fit(data.reshape(-1, 1))  # expects a 2-D array of shape (n_samples, n_features)
    ms = clf.means_.ravel()
    ws = clf.weights_
    cs = np.sqrt(clf.covariances_.ravel())  # standard deviations
    if doplot:
        histo = plt.hist(data, 200, density=True)
        for w, m, c in zip(ws, ms, cs):
            plt.plot(histo[1], w * norm.pdf(histo[1], m, c), linewidth=3)
    return ms, cs, ws
Coeffs 0 and 4 are degenerate: there is absolutely nothing in the data that can decide between them. You should use a single zero-level parameter instead of two (i.e. remove one of them from your code). This is probably what is stopping your fit (ignore the comments here saying this is not possible; there are clearly at least two peaks in that data, and you should certainly be able to fit them).
(It may not be clear why I am suggesting this, but what is happening is that coeffs 0 and 4 can cancel each other out. They can both be zero, or one could be 100 and the other -100; either way, the fit is just as good. This "confuses" the fitting routine, which spends its time trying to work out what they should be, when there is no single right answer, because whatever value one takes, the other can just be its negative, and the fit will be the same.)
In fact, from the plot it looks like there may be no need for a zero level at all. I would try dropping both and seeing how the fit looks.
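Concretely, a single-zero-level version of your model would look like this (a sketch of the suggestion above, with seven parameters instead of eight; drop coeffs0[0] as well if no baseline is needed at all):

def model(time_array0, coeffs0):
    # coeffs0 = [baseline, amp1, centre1, width1, amp2, centre2, width2]
    baseline = coeffs0[0]  # one shared zero level instead of two degenerate ones
    a = coeffs0[1] * np.exp(-((time_array0 - coeffs0[2]) / coeffs0[3])**2)
    b = coeffs0[4] * np.exp(-((time_array0 - coeffs0[5]) / coeffs0[6])**2)
    return baseline + a + b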
Also, there is no need to fit coeffs 1 and 5 (or the zero level) in the least squares. Because the model is linear in those parameters, you could compute their values exactly on each iteration, as in the sketch below; this will make things faster, but it is not critical. (I just noticed you say your maths is not so good, so feel free to ignore this one.)
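A minimal sketch of that linear-parameters idea (my own illustration, reusing time_array, counts_array and leastsq from the question, with no zero level; residuals_separable is a hypothetical helper name): for fixed centres and widths, the amplitudes enter linearly, so they can be solved in closed form inside the residual function, leaving only four nonlinear parameters for leastsq.

def residuals_separable(nonlinear, counts_array, time_array):
    # nonlinear = [centre1, width1, centre2, width2]; the two amplitudes
    # are obtained by an exact linear least-squares fit at each call
    c1, w1, c2, w2 = nonlinear
    basis = np.column_stack([
        np.exp(-((time_array - c1) / w1)**2),
        np.exp(-((time_array - c2) / w2)**2),
    ])
    amps, _, _, _ = np.linalg.lstsq(basis, counts_array, rcond=None)
    return counts_array - basis @ amps

start = np.array([16.2, 4.47, 13.5, 2.0])  # initial guesses taken from the question
nl_fit, flag = leastsq(residuals_separable, start, args=(counts_array, time_array))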