Proper Way to Fit a Lognormal Distribution with Weights in Python

Currently I have code to fit a lognormal distribution:
shape, loc, scale = sm.lognorm.fit(dataToLearn, floc=0)
for b in bounds:
    toPlot.append((b, currCount + sm.lognorm.ppf(b, s=shape, loc=loc, scale=scale)))
I would like to be able to pass in a vector of weights to the fitting. Currently I have a workaround, where I keep all the weights rounded to 2 decimals and then repeat each value w times so that it gets weighted properly.
for i, d in enumerate(dataToLearn):
    dataToLearn2 += int(w[i] * 100) * [d]
The runtime of this is getting too slow on my computer, so I was hoping for a more correct solution. Please advise whether SciPy or NumPy can make my workaround faster and more efficient.

The SciPy distributions do not implement a weighted fit. For the log-normal distribution, however, there are explicit formulas for the (unweighted) maximum likelihood estimation, and these are easily generalized for weighted data. The explicit formulas are both (in effect) averages, and the generalization to the case of weighted data is to use weighted averages in the formulas.
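Concretely, with weights w_i the weighted maximum likelihood estimates are (in LaTeX notation)

\hat{\mu} = \frac{\sum_i w_i \ln x_i}{\sum_i w_i}, \qquad
\hat{\sigma}^2 = \frac{\sum_i w_i (\ln x_i - \hat{\mu})^2}{\sum_i w_i},

where SciPy's shape parameter is \hat{\sigma} and the scale is e^{\hat{\mu}}. These are the weighted averages used in the script that follows.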
Here's a script that demonstrates the calculation using a small data set with integer weights, so we know what the exact value of the fitted parameters should be.
import numpy as np
from scipy.stats import lognorm
# Sample data and weights. To enable an exact comparison with
# the method of generating an array with the values repeated
# according to their weight, I use an array of weights that is
# all integers.
x = np.array([2.5, 8.4, 9.3, 10.8, 6.8, 1.9, 2.0])
w = np.array([ 1, 1, 2, 1, 3, 3, 1])
#-----------------------------------------------------------------------------
# Fit the log-normal distribution by creating an array containing the values
# repeated according to their weight.
xx = np.repeat(x, w)
# Use the explicit formulas for the MLE of the log-normal distribution.
lnxx = np.log(xx)
muhat = np.mean(lnxx)
varhat = np.var(lnxx)
shape = np.sqrt(varhat)
scale = np.exp(muhat)
print("MLE using repeated array: shape=%7.5f scale=%7.5f" % (shape, scale))
#-----------------------------------------------------------------------------
# Use the explicit formulas for the weighted MLE of the log-normal
# distribution.
lnx = np.log(x)
muhat = np.average(lnx, weights=w)
# varhat is the weighted variance of ln(x). There isn't a function in
# numpy for the weighted variance, so we compute it using np.average.
varhat = np.average((lnx - muhat)**2, weights=w)
shape = np.sqrt(varhat)
scale = np.exp(muhat)
print("MLE using weights: shape=%7.5f scale=%7.5f" % (shape, scale))
#-----------------------------------------------------------------------------
# Might as well check that we get the same result from lognorm.fit() using the
# repeated array
shape, loc, scale = lognorm.fit(xx, floc=0)
print("MLE using lognorm.fit: shape=%7.5f scale=%7.5f" % (shape, scale))
The output is
MLE using repeated array: shape=0.70423 scale=4.57740
MLE using weights: shape=0.70423 scale=4.57740
MLE using lognorm.fit: shape=0.70423 scale=4.57740
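The shape and scale from the weighted formulas drop straight into the question's plotting code; a minimal sketch, with loc fixed at 0 as in the question:
# Hypothetical usage: a quantile from the weighted fit
q95 = lognorm.ppf(0.95, s=shape, loc=0, scale=scale)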

You can use numpy.repeat to make the workaround more efficient:
import numpy as np
dataToLearn = np.array([1,2,3,4,5])
weights = np.array([1,2,1,1,3])
print(np.repeat(dataToLearn, weights))
# Output: array([1, 2, 2, 3, 4, 5, 5, 5])
A very basic performance test of numpy.repeat:
import timeit
code_before = """
weights = np.array([1,2,1,1,3] * 1000)
dataToLearn = np.array([1,2,3,4,5] * 1000)
dataToLearn2 = []
for i, d in enumerate(dataToLearn):
    dataToLearn2 += int(weights[i]) * [d]
"""
code_after = """
weights = np.array([1,2,1,1,3] * 1000)
dataToLearn = np.array([1,2,3,4,5] * 1000)
np.repeat(dataToLearn, weights)
"""
print(timeit.timeit(code_before, setup="import numpy as np", number=1000))
print(timeit.timeit(code_after, setup="import numpy as np", number=1000))
As a result, I got roughly 3.38 seconds for your current approach vs. 0.75 seconds for numpy.repeat.
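If your weights are floats rounded to two decimals, as in the question, the same trick still applies; a sketch, assuming w is a NumPy array of such weights:
# Scale the weights to integers first, as in the original workaround
dataToLearn2 = np.repeat(dataToLearn, np.rint(w * 100).astype(int))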

Related

How to estimate motion with FTT and Cross-Correlation?

I'm working on the estimation of cloud displacement for wind energy purposes with RGB GOES satellite images. I found the following methodology in the paper "An Automated Technique for Obtaining Cloud Motion From Geosynchronous Satellite Data Using Cross Correlation" to achieve it. I don't know if this is a good way to compute this. The code basically gets the cross-correlation from the Fourier transform to calculate cloud displacement between the roi_a and roi_b images.
import numpy as np
import cv2 as cv
import matplotlib.pyplot as plt
img_a = cv.imread('2019.1117.1940.goes-16.rgb.tif', 0)
img_b = cv.imread('2019.1117.1950.goes-16.rgb.tif', 0)
roi_a = img_a[700:900, 1900:2100]
roi_b = img_b[700:900, 1900:2100]
def Fou(image):
    fft_roi = np.fft.fft2(image)
    return fft_roi

def inv_Fou(C_w):
    c_t = np.fft.ifft2(C_w)
    c_t = np.abs(c_t)
    return c_t
#Step 1: gets the FFT
G_t0 = Fou(roi_a)##t_0
fft_roiA_conj = np.conj(G_t0) #Conjugate
G_t1 = Fou(roi_b)##t_1
#Step 2: Compute C(m, v)
prod = np.dot(fft_roiA_conj, G_t1)
#Step 3: Perform the inverse FFT
inv = inv_Fou(prod)
plt.imshow(inv, cmap='gray')
plt.title('C (m,v) --> Cov(p,q)')
plt.xticks([])
plt.yticks([])
plt.show()
#Step 4: Compute cross correlation coefficient and the maximum cross correlation coefficient
def rms(sigma):
    """Compute the standard deviation of an image."""
    rms = np.std(sigma)
    return rms
R_t = inv / (rms(roi_a) * rms(roi_b))
This is the first time that I have used the FFT on images, so I have some questions about it:
1. I don't apply fftshift; can this affect the result?
2. What is the difference between using np.dot in step 2 and a simple '*', like prod = fft_roiA_conj * G_t1?
3. How do I interpret the image result (C(m, v) -> Cov(p, q)) from step 3?
4. How can I obtain the maximum coefficients p' and q' (the maximum coefficients in the x and y directions) from R_t?
1 - fftshift is a circular rotation; if you have a two-sided signal, the correlation you compute is shifted (circularly). What is important is that you map your indices to displacements correctly, with or without fftshift.
2 - numpy.dot is the matrix product (equivalent to the @ operator in recent Python versions), while the * operator does element-wise multiplication; in my understanding, you want the element-wise product at step 2.
3 - Once you correct step 2, you will have an image such that inv[i, j] is the correlation of the image roi_a with the image roi_b rolled by i rows and j columns.
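A minimal sketch of the corrected step 2, reusing the question's variables:
# Step 2 (corrected): element-wise product instead of a matrix product
prod = fft_roiA_conj * G_t1
# Step 3 as before: the inverse FFT gives the (circular) cross-correlation
inv = inv_Fou(prod)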
To answer the last question I will work out an example.
I will use the image scipy.misc.face; it is an RGB image, so it provides three channels that are highly correlated.
import scipy.misc
import numpy as np
import matplotlib.pyplot as plt
f = scipy.misc.face()
plt.figure(figsize=(12, 4))
plt.subplot(131), plt.imshow(f[:,:, 0])
plt.subplot(132), plt.imshow(f[:,:, 1])
plt.subplot(133), plt.imshow(f[:,:, 2])
The function img_corr combines the three steps of the cross-correlation (for images of the same size). Notice that I use rfft2 and irfft2; these are the FFTs for real data, which take advantage of the symmetry in the frequency domain.
def img_corr(foi_a, foi_b):
    return np.fft.irfft2(np.fft.rfft2(foi_a) * np.conj(np.fft.rfft2(foi_b)))
C = img_corr(f[:,:,1], f[:,:,2])
plt.figure(figsize=(12, 4))
plt.subplot(121), plt.imshow(C), plt.title('FFT indices')
plt.subplot(122), plt.imshow(np.fft.fftshift(C, (0, 1))), plt.title('fftshifted version')
To retrieve the position:
# this returns the index into the flattened array of all pixels
best_corr = np.argmax(C)
# unravel_index gives the 2D index
best_pos = np.unravel_index(best_corr, C.shape)
# this gets the position as a fraction of the image size
relative_pos = [np.fft.fftfreq(size)[index] for index, size in zip(best_pos, C.shape)]
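Applied to the question's R_t (once step 2 uses the element-wise product), the same recipe yields p' and q'; a sketch:
# Row and column of the maximum cross-correlation coefficient
p_prime, q_prime = np.unravel_index(np.argmax(R_t), R_t.shape)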
I hope this completes the answer.

Are these functions equivalent?

I am building a neural network that makes use of t-distribution noise. I am using np.random.standard_t from the NumPy library and tf.distributions.StudentT from TensorFlow. The link to the documentation of the first function is here and that to the second function is here. I am using the said functions like below:
a = np.random.standard_t(df=3, size=10000) # numpy's function
t_dist = tf.distributions.StudentT(df=3.0, loc=0.0, scale=1.0)
sess = tf.Session()
b = sess.run(t_dist.sample(10000))
In the documentation provided for the Tensorflow implementation, there's a parameter called scale whose description reads
The scaling factor(s) for the distribution(s). Note that scale is not technically the standard deviation of this distribution but has semantics more similar to standard deviation than variance.
I have set scale to be 1.0 but I have no way of knowing for sure if these refer to the same distribution.
Can someone help me verify this? Thanks
I would say they are, as their sampling is defined in almost the exact same way in both cases. This is how the sampling of tf.distributions.StudentT is defined:
def _sample_n(self, n, seed=None):
    # The sampling method comes from the fact that if:
    #   X ~ Normal(0, 1)
    #   Z ~ Chi2(df)
    #   Y = X / sqrt(Z / df)
    # then:
    #   Y ~ StudentT(df).
    seed = seed_stream.SeedStream(seed, "student_t")
    shape = tf.concat([[n], self.batch_shape_tensor()], 0)
    normal_sample = tf.random.normal(shape, dtype=self.dtype, seed=seed())
    df = self.df * tf.ones(self.batch_shape_tensor(), dtype=self.dtype)
    gamma_sample = tf.random.gamma([n],
                                   0.5 * df,
                                   beta=0.5,
                                   dtype=self.dtype,
                                   seed=seed())
    samples = normal_sample * tf.math.rsqrt(gamma_sample / df)
    return samples * self.scale + self.loc  # Abs(scale) not wanted.
So it is a standard normal sample divided by the square root of a chi-square sample with parameter df divided by df. The chi-square sample is taken as a gamma sample with parameter 0.5 * df and rate 0.5, which is equivalent (chi-square is a special case of gamma). The scale value, like the loc, only comes into play in the last line, as a way to "relocate" the distribution sample at some point and scale. When scale is one and loc is zero, they do nothing.
Here is the implementation for np.random.standard_t:
double legacy_standard_t(aug_bitgen_t *aug_state, double df) {
  double num, denom;
  num = legacy_gauss(aug_state);
  denom = legacy_standard_gamma(aug_state, df / 2);
  return sqrt(df / 2) * num / sqrt(denom);
}
So it is essentially the same thing, slightly rephrased. Here we also have a gamma sample with shape df / 2, but it is standard (rate one). However, the missing 0.5 now appears in the numerator, as the / 2 within the sqrt. So it is just moving the numbers around. Here there is no scale or loc, though.
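To see that the two recipes coincide, note that a chi-square variable with df degrees of freedom has the same distribution as twice a standard gamma variable with shape df / 2, so (in LaTeX notation)

\sqrt{\tfrac{df}{2}} \cdot \frac{X}{\sqrt{G}} = \frac{X}{\sqrt{G / (df/2)}} = \frac{X}{\sqrt{Z / df}}, \qquad Z = 2G \sim \chi^2_{df},\ G \sim \mathrm{Gamma}(\tfrac{df}{2}, 1),

which is exactly the construction in the TensorFlow code above.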
In truth, the difference is that in the case of TensorFlow the distribution is a shifted and scaled (location-scale) t-distribution, which reduces to the standard t when loc=0.0 and scale=1.0. A simple empirical check that they are the same in that case is to plot histograms for both distributions and see how close they look.
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

np.random.seed(0)
t_np = np.random.standard_t(df=3, size=10000)

with tf.Graph().as_default(), tf.Session() as sess:
    tf.random.set_random_seed(0)
    t_dist = tf.distributions.StudentT(df=3.0, loc=0.0, scale=1.0)
    t_tf = sess.run(t_dist.sample(10000))

plt.hist((t_np, t_tf), np.linspace(-10, 10, 20), label=['NumPy', 'TensorFlow'])
plt.legend()
plt.tight_layout()
plt.show()
Output: a histogram in which the NumPy and TensorFlow samples overlap closely.
That looks pretty close. Obviously, from the point of view of statistical samples, this is not any kind of proof. If you are still not convinced, there are statistical tools for testing whether a sample comes from a certain distribution, or whether two samples come from the same distribution.
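For example, the two-sample Kolmogorov-Smirnov test is one such tool; a minimal sketch using SciPy on the samples above:
from scipy import stats
# A large p-value means we cannot reject that both samples share a distribution
statistic, pvalue = stats.ks_2samp(t_np, t_tf)
print(statistic, pvalue)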

Python: Kernel Density Estimation for positive values

I want to get kernel density estimation for positive data points. Using Python Scipy Stats package, I came up with the following code.
def get_pdf(data):
    a = np.array(data)
    ag = st.gaussian_kde(a)
    x = np.linspace(0, max(data), int(max(data)))
    y = ag(x)
    return x, y
This works perfectly for most data sets, but it gives an erroneous result for "all positive" data points. To make sure this works correctly, I use numerical integration to compute the area under this curve.
def trapezoidal_2(ag, a, b, n):
    h = float(b - a) / n
    s = 0.0
    s += ag(a)[0] / 2.0
    for i in range(1, n):
        s += ag(a + i * h)[0]
    s += ag(b)[0] / 2.0
    return s * h
Since the data is spread in the region (0, int(max(data))), we should get a value close to 1, when executing the following line.
b = 1
data = st.pareto.rvs(b, size=10000)
data = list(data)
a = np.array(data)
ag = st.gaussian_kde(a)
trapezoidal_2(ag, 0, int(max(data)), int(max(data))*2)
But it gives a value close to 0.5 when I test it.
However, when I integrate from -100 to max(data), it provides a value close to 1:
trapezoidal_2(ag, -100, int(max(data)), int(max(data))*2+200)
The reason is that ag (the KDE) is defined for values less than 0, even though the original data set contains only positive values.
So how can I get a kernel density estimation that considers only positive values, such that the area under the curve in the region (0, max(data)) is close to 1?
The choice of bandwidth is quite important when performing kernel density estimation. I think Scott's rule and Silverman's rule work well for distributions similar to a Gaussian, but they do not work well for the Pareto distribution.
Quote from the doc:
Bandwidth selection strongly influences the estimate obtained from the KDE (much more so than the actual shape of the kernel). Bandwidth selection can be done by a "rule of thumb", by cross-validation, by "plug-in methods" or by other means; see [3], [4] for reviews. gaussian_kde uses a rule of thumb, the default is Scott's Rule.
Try with different bandwidth values, for example:
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
b = 1
sample = stats.pareto.rvs(b, size=3000)
kde_sample_scott = stats.gaussian_kde(sample, bw_method='scott')
kde_sample_scalar = stats.gaussian_kde(sample, bw_method=1e-3)
# Compute the integrale:
print('integrale scott:', kde_sample_scott.integrate_box_1d(0, np.inf))
print('integrale scalar:', kde_sample_scalar.integrate_box_1d(0, np.inf))
# Graph:
x_span = np.logspace(-2, 1, 550)
plt.plot(x_span, stats.pareto.pdf(x_span, b), label='theoretical pdf')
plt.plot(x_span, kde_sample_scott(x_span), label="estimated pdf 'scott'")
plt.plot(x_span, kde_sample_scalar(x_span), label="estimated pdf 'scalar'")
plt.xlabel('X'); plt.legend();
gives:
integrale scott: 0.5572130540733236
integrale scalar: 0.9999999999968957
and the corresponding plot, where we see that the KDE using Scott's rule is wrong.

Weighted Least Squares in Statsmodels vs. Numpy?

I am trying to replicate the functionality of statsmodels's weighted least squares (WLS) function with NumPy's ordinary least squares (OLS) function (i.e., NumPy refers to OLS as just "least squares").
In other words, I want to compute the WLS in NumPy. I used this Stack Overflow post as a reference, but drastically different R² values arise when moving from statsmodels to NumPy.
Take the following example code that replicates this:
import numpy as np
import statsmodels.formula.api as smf
import pandas as pd
# Test Data
patsy_equation = "y ~ C(x) - 1"  # Use minus one to get rid of the hidden "+ 1" intercept
weight = np.array([0.37, 0.37, 0.53, 0.754])
y = np.array([0.23, 0.55, 0.66, 0.88])
x = np.array([3, 3, 3, 3])
d = {"x": x.tolist(), "y": y.tolist()}
data_df = pd.DataFrame(data=d)
# Weighted Least Squares from Statsmodel API
statsmodel_model = smf.wls(formula=patsy_equation, weights=weight, data=data_df)
statsmodel_r2 = statsmodel_model.fit().rsquared
# Weighted Least Squares from Numpy API
Aw = x.reshape((-1, 1)) * np.sqrt(weight[:, np.newaxis]) # Multiply two column vectors
Bw = y * np.sqrt(weight)
numpy_model, numpy_resid = np.linalg.lstsq(Aw, Bw, rcond=None)[:2]
numpy_r2 = 1 - numpy_resid / (Bw.size * Bw.var())
print("Statsmodels R²: " + str(statsmodel_r2))
print("Numpy R²: " + str(numpy_r2[0]))
After running such code, I get the following results:
Statsmodels R²: 2.220446049250313e-16
Numpy R²: 0.475486515775414
Clearly something is wrong here! Can anyone point out my flaw? Am I misunderstanding the patsy formula?
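For reference, the row-scaling trick used above rests on the identity (in LaTeX notation)

\min_\beta \sum_i w_i \left(y_i - x_i^\top \beta\right)^2 = \min_\beta \sum_i \left(\sqrt{w_i}\, y_i - \sqrt{w_i}\, x_i^\top \beta\right)^2,

so any discrepancy between the two fits must come from the design matrix produced by the patsy formula or from how each API defines R², not from the transform itself.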

PyMC multiple linear regressions

I'm trying to fit several lines sharing the same intercept.
import numpy as np
import pymc
# Observations
a_actual = np.array([[2., 5., 7.]]).T
b_actual = 3.
t = np.arange(100)
obs = np.random.normal(a_actual * t + b_actual)
# PyMC Model
def model_linear():
    b = pymc.Uniform('b', value=1., lower=0, upper=200)
    a = []
    s = []
    r = []
    for i in range(len(a_actual)):
        s.append(pymc.Uniform('sigma_{}'.format(i), value=1., lower=0, upper=100))
        a.append(pymc.Uniform('a_{}'.format(i), value=1., lower=0, upper=200))
        r.append(pymc.Normal('r_{}'.format(i), mu=a[i] * t + b, tau=1/s[i]**2, value=obs[i], observed=True))
    return [pymc.Container(a), b, pymc.Container(s), pymc.Container(r)]
model = pymc.Model(model_linear())
map = pymc.MAP(model)
map.fit()
map.revert_to_max()
The computed MAP estimates are far from the actual values. Those estimates are also very sensitive to the lower and upper bounds of the sigmas and a, to the actual values of a (e.g. a = [.2, .5, .7] gives me good estimates), and to the number of lines used in the regression.
Is this the right way of performing my linear regressions?
P.S.: I tried to use an exponential prior distribution for the sigmas, but the results were no better.
I think using MAP might not be your best bet. If you are able to do proper sampling, then consider replacing the last 3 lines of your example code with:
MCMClinear = pymc.MCMC(model)
MCMClinear.sample(10000, burn=5000, thin=5)
linear_output = MCMClinear.stats()
Printing the linear_output for this gives very accurate inferences for the parameters.
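As a rough usage sketch (assuming PyMC 2's stats() layout, where each variable name maps to a dict of summary statistics), the posterior mean of the shared intercept can then be read out with:
# 'mean' is one of the summary keys stats() reports per variable
print(linear_output['b']['mean'])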
