I am looking for a Python function (or to write my own if there is not one) to get the t-statistic in order to use in a confidence interval calculation.
I have found tables that give answers for various probabilities / degrees of freedom like this one, but I would like to be able to calculate this for any given probability. For anyone not already familiar with this: degrees of freedom is the number of data points (n) in your sample minus 1, and the column headings at the top are probabilities (p), e.g. a 2-tailed significance level of 0.05 is used if you are looking up the t-score for 95% confidence that, if you repeated n tests, the result would fall within the mean +/- the confidence interval.
I have looked into using various functions within scipy.stats, but none that I can see seem to allow for the simple inputs I described above.
Excel has a simple implementation of this e.g. to get the t-score for a sample of 1000, where I need to be 95% confident I would use: =TINV(0.05,999) and get the score ~1.96
Here is the code that I have used to implement confidence intervals so far; as you can see, I am using a very crude way of getting the t-score at present (just allowing a few values for perc_conf and warning that it is not accurate for samples < 1000):
# -*- coding: utf-8 -*-
from __future__ import division
import math

def mean(lst):
    # μ = 1/N Σ(xi)
    return sum(lst) / float(len(lst))

def variance(lst):
    """
    Uses standard variance formula (sum of each (data point - mean) squared)
    all divided by number of data points
    """
    # σ² = 1/N Σ((xi-μ)²)
    mu = mean(lst)
    return 1.0/len(lst) * sum([(i-mu)**2 for i in lst])

def conf_int(lst, perc_conf=95):
    """
    Confidence interval - given a list of values compute the square root of
    the variance of the list (v) divided by the number of entries (n)
    multiplied by a constant factor of (c). This means that I can
    be confident of a result +/- this amount from the mean.
    The constant factor can be looked up from a table, for 95% confidence
    on a reasonable size sample (>=500) 1.96 is used.
    """
    if perc_conf == 95:
        c = 1.96
    elif perc_conf == 90:
        c = 1.64
    elif perc_conf == 99:
        c = 2.58
    else:
        c = 1.96
        print 'Only 90, 95 or 99 % are allowed for, using default 95%'
    n, v = len(lst), variance(lst)
    if n < 1000:
        print 'WARNING: constant factor may not be accurate for n < ~1000'
    return math.sqrt(v/n) * c
Here is an example call for the above code:
# Example: 1000 coin tosses on a fair coin. What is the range that I can be 95%
# confident the result will fall within.
# list of 1000 perfectly distributed...
perc_conf_req = 95
n, p = 1000, 0.5 # sample_size, probability of heads for each coin
l = [0 for i in range(int(n*(1-p)))] + [1 for j in range(int(n*p))]
exp_heads = mean(l) * len(l)
c_int = conf_int(l, perc_conf_req)
print 'I can be '+str(perc_conf_req)+'% confident that the result of '+str(n)+ \
' coin flips will be within +/- '+str(round(c_int*100,2))+'% of '+\
str(int(exp_heads))
x = round(n*c_int,0)
print 'i.e. between '+str(int(exp_heads-x))+' and '+str(int(exp_heads+x))+\
' heads (assuming a probability of '+str(p)+' for each flip).'
The output for this is:
I can be 95% confident that the result of 1000 coin flips will be
within +/- 3.1% of 500 i.e. between 469 and 531 heads (assuming a
probability of 0.5 for each flip).
I also looked into calculating the t-distribution for a range and then returning the t-score that got the probability closest to that required, but I had issues implementing the formula. Let me know if this is relevant and you want to see the code, but I have assumed not as there is probably an easier way.
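For illustration, here is a minimal sketch of that closest-probability approach, assuming scipy's t CDF is available (the ppf-based answers below are the direct route): evaluate the two-tailed probability over a grid of candidate t values and keep the one nearest the target.
import numpy as np
from scipy import stats

def t_score_by_search(two_tail_p, dof, grid=np.linspace(0.0, 10.0, 100001)):
    # two-tailed probability of exceeding +/- t is 2 * (1 - CDF(t))
    two_tail = 2.0 * (1.0 - stats.t.cdf(grid, dof))
    return grid[np.argmin(np.abs(two_tail - two_tail_p))]

print(t_score_by_search(0.05, 999))  # ~1.96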
Have you tried scipy?
You will need to install the scipy library... more about installing it here: http://www.scipy.org/install.html
Once installed, you can replicate the Excel functionality like so:
from scipy import stats
# Student's t, df = 999, p < 0.05, 2-tail
# equivalent to Excel TINV(0.05,999)
print stats.t.ppf(1-0.025, 999)
# Student's t, df = 999, p < 0.05, single tail
# equivalent to Excel TINV(2*0.05,999)
print stats.t.ppf(1-0.05, 999)
You can also read about installing the library here: how to install scipy for python?
Try the following code:
from scipy import stats
# Student's t, n=22, 2-tail
#stats.t.ppf(1-0.025, df)
# df=n-1=22-1=21
print (stats.t.ppf(1-0.025, 21))
scipy.stats.t has another method isf that directly returns the quantile that corresponds to the upper tail probability alpha. This is an implementation of the inverse survival function and returns the exact same value as t.ppf(1-alpha, dof).
from scipy import stats
alpha, dof = 0.05, 999
stats.t.isf(alpha, dof)
# 1.6463803454275356
For two-tailed, halve alpha:
stats.t.isf(alpha/2, dof)
# 1.962341461133449
You can try this code:
# for small samples (<50) we use t-statistics
# n = 9, degree of freedom = 9-1 = 8
# for 99% confidence interval, alpha = 1% = 0.01 and alpha/2 = 0.005
from scipy import stats
ci = 99
n = 9
t = stats.t.ppf(1- ((100-ci)/2/100), n-1) # 99% CI, t8,0.005
print(t) # 3.36
I'm looking to create a random function where 100 is the rarest number and 1 is the most common. It should be a linear distribution, so for example the chance of the function returning 100 is the lowest, 99 is the second lowest, 98 is the third lowest, and so forth. I've tried the code below:
import math
import random

def getPercentContent():
    minPercent = 5
    maxPercent = 102  # this will return 101 as highest number
    power = 1.5  # higher number, more concentration to lower numbers
    num = math.floor(minPercent + (maxPercent - minPercent) * random.random()**power)
    return str(num)
This does return a lot more low numbers through 1-10, but after that, since it's an exponential function, numbers 10-100 have a very similar count.
Is there any way to create a linear distribution, where the frequency falls off steadily from 1 (most common) to 100 (rarest)?
For the distribution you're looking for, simply generate two numbers in the range and take their minimum. Here is an example:
min(random.randint(minPercent,maxPercent-1),
random.randint(minPercent,maxPercent-1))
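As a quick sanity check (a throwaway sketch, with minPercent and maxPercent hard-coded to the question's 5 and 102), counting a large sample shows the frequency falling roughly linearly from the low end to the high end:
import random
from collections import Counter

minPercent, maxPercent = 5, 102
counts = Counter(
    min(random.randint(minPercent, maxPercent - 1),
        random.randint(minPercent, maxPercent - 1))
    for _ in range(100000)
)
for value in (5, 50, 101):
    print(value, counts[value])  # counts shrink roughly linearly as the value grows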
You just need to return an inverse of whatever number you're inputting and then normalise all values by the sum of all frequencies to get percentages:
def frequency(start: int = 1, end: int = 100):
    freq = [1/n for n in range(start, end + 1)]
    perc = [k/sum(freq) for k in freq]
    return perc
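To actually draw numbers with those weights, one option (a sketch, not part of the function above) is random.choices, which accepts relative weights directly:
import random

values = list(range(1, 101))
weights = frequency(1, 100)   # the function above
sample = random.choices(values, weights=weights, k=10)
print(sample)                 # low numbers show up far more often than high ones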
What you're describing is a triangular distribution with its mode (most frequently occurring value) equal to the min. These are built-in as a continuous distribution in the random module or (if you want a lot of them fast) in numpy. If you want integer outcomes from 1 to 100, inclusive, generate with min = mode = 0 and max = 100, take the floor, and add 1 to the result.
The following code generates and plots a million triangular values in half a second on my laptop:
from numpy.random import default_rng
import matplotlib.pyplot as plt
rng = default_rng()
data = rng.triangular(0, 0, 100, size = 1000000).astype(int) + 1
h = plt.hist(data, bins=100, density=True)
plt.show()
Sample output: a histogram that falls off linearly from a peak at 1 to near zero at 100.
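If numpy isn't available, the same trick works with the standard-library random module mentioned above; a minimal sketch:
import random

# min = mode = 0, max = 100; floor and add 1 to get integers from 1 to 100
values = [int(random.triangular(0, 100, 0)) + 1 for _ in range(10)]
print(values)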
I have written a Monte Carlo program to integrate a function f(x).
I have now been asked to calculate the percentage error.
Having done a quick literature search, I found that this can be given with the equation %error = (sqrt(var[f(x)]/n))*100, where n is the number of random points I used to derive my answer.
However, when I run my integration code, my percentage error is greater than that given by this formula.
Do I have the correct formula?
Any help would be greatly appreciated. Thanks x
Here is a quick example: estimate the integral of a linear function on the interval [0...1] using Monte Carlo. To estimate the error you have to collect the second moment (values squared), then compute the variance and standard deviation, and (assuming the CLT) the error of the simulation in the original units as well as in %.
Code, Python 3.7, Anaconda, Win10 64x
import numpy as np

def f(x):  # linear function to integrate
    return x

np.random.seed(312345)
N = 100000
x = np.random.random(N)
q = f(x)   # first moment
q2 = q*q   # second moment
mean = np.sum(q) / float(N)                # compute mean explicitly, not using np.mean
var = np.sum(q2) / float(N) - mean * mean  # variance as E[X^2] - E[X]^2
sd = np.sqrt(var)                          # std. deviation
print(mean)  # should be 1/2
print(var)   # should be 1/12
print(sd)    # should be 0.5/sqrt(3)
print("-----------------------------------------------------")
sigma = sd / np.sqrt(float(N))  # assuming CLT, error estimate in original units
print("result = {0} with error +- {1}".format(mean, sigma))
err_pct = sigma / mean * 100.0  # error estimate in percent
print("result = {0} with error +- {1}%".format(mean, err_pct))
Be aware that we computed a one-sigma error, and (leaving aside that the error estimate is itself a random value) the true result is within the printed mean +- error only for about 68% of the runs. You could print mean +- 2*error, which would contain the true result in about 95% of cases, or mean +- 3*error, which contains it in about 99.7% of the runs, and so on.
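Those coverage figures come from the normal distribution; assuming scipy is available, they can be reproduced directly (a quick sketch, not part of the original calculation):
from scipy import stats

for k in (1, 2, 3):
    coverage = stats.norm.cdf(k) - stats.norm.cdf(-k)
    print("mean +- {0}*sigma covers {1:.1%} of outcomes".format(k, coverage))
# mean +- 1*sigma covers 68.3% of outcomes
# mean +- 2*sigma covers 95.4% of outcomes
# mean +- 3*sigma covers 99.7% of outcomes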
UPDATE
For the sampling variance estimate there is a known problem called bias in the estimator. Basically, we slightly underestimate the sampling variance, so the proper correction (Bessel's correction) should be applied:
var = np.sum(q2) / float(N) - mean * mean # variance as E[X^2] - E[X]^2
var *= float(N)/float(N-1)
In many cases (and many examples) it is omitted because N is very large, which makes the correction pretty much invisible; e.g. if you have a statistical error of 1% but N is in the millions, the correction is of no practical use.
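For what it's worth, numpy can apply the same correction for you via its ddof argument; a quick sketch comparing the two:
import numpy as np

q = np.random.random(100000)
print(np.var(q))          # divides by N (biased)
print(np.var(q, ddof=1))  # divides by N-1 (Bessel's correction applied)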
I tried using the random.gauss function in Python to get a random value within a range. The gauss function takes the mean and standard deviation as parameters. Shouldn't the return value lie within [mean - standard deviation, mean + standard deviation]?
Below is the code snippet:
import random

for y in [random.gauss(4, 1) for _ in range(50)]:
    if y > 5 or y < 3:  # shouldn't y be between (3, 5)?
        print(y)
Output of the code:
6.011096878888296
2.9192195126660403
5.020299287583643
2.9322959456674083
1.6704559841869528
"Shouldn't the return value be between [mean +- standard deviation]?"
No. For the Gaussian distribution slightly less than 32% of the outcomes will be more than one standard deviation away from the mean in either direction. There is no fixed range that contains 100% of the outcomes.
The Gaussian distribution is only concerned with the probability of a value deviating from the mean by a certain amount. So any value is possible; it's just that the probability is very low for some ranges of values.
A sample size of 50 is not enough for the sample mean and sample SD to be good estimators of the population mean/SD. Law of large numbers to the rescue:
from random import gauss

xs = []
for i in range(100000):
    xs.append(gauss(0, 1))

print("mean = ", sum(xs)/len(xs))  # should print a number close to 0

in_sd = len(list(x for x in xs if x > -1 and x < 1))
print("in SD = ", in_sd/len(xs))  # should print a number close to 0.68
(https://repl.it/#millimoose/SiennaFlatBackground)
To get random numbers between 3 and 5, use random.uniform(3, 5) instead.
I want to specify the probability density function of a distribution and then pick up N random numbers from that distribution in Python. How do I go about doing that?
In general, you want to have the inverse cumulative probability density function. Once you have that, then generating the random numbers along the distribution is simple:
import random

def sample(n):
    return [icdf(random.random()) for _ in range(n)]
Or, if you use NumPy:
import numpy as np

def sample(n):
    return icdf(np.random.random(n))
In both cases icdf is the inverse cumulative distribution function which accepts a value between 0 and 1 and outputs the corresponding value from the distribution.
To illustrate the nature of icdf, we'll take a simple uniform distribution between values 10 and 12 as an example:
the probability density function is 0.5 between 10 and 12, zero elsewhere
the cumulative distribution function is 0 below 10 (no samples below 10), 1 above 12 (no samples above 12), and increases linearly between those values (it is the integral of the PDF)
the inverse cumulative distribution function is only defined between 0 and 1: at 0 it is 10, at 1 it is 12, and it changes linearly in between
Of course, the difficult part is obtaining the inverse cumulative density function. It really depends on your distribution, sometimes you may have an analytical function, sometimes you may want to resort to interpolation. Numerical methods may be useful, as numerical integration can be used to create the CDF and interpolation can be used to invert it.
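As a concrete illustration of the uniform [10, 12] example above (a minimal sketch I've added): the analytic icdf is just a straight line from 10 to 12, and a purely numerical version can be built by tabulating the PDF, integrating it into a CDF, and inverting it with interpolation (np.interp here; the function names are illustrative).
import numpy as np

def icdf_uniform(u):
    # analytic inverse CDF of the uniform distribution on [10, 12]
    return 10 + 2 * u

# numerical alternative: tabulate the PDF, integrate it into a CDF, invert by interpolation
xs = np.linspace(10, 12, 1001)
pdf = np.full_like(xs, 0.5)
cdf = np.cumsum(pdf) * (xs[1] - xs[0])
cdf /= cdf[-1]                       # normalise so the CDF ends at exactly 1

def icdf_numeric(u):
    return np.interp(u, cdf, xs)     # swap the axes of the CDF to invert it

print(icdf_uniform(0.25), icdf_numeric(0.25))   # both ~10.5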
This is my function to retrieve a single random number distributed according to the given probability density function. I used a Monte-Carlo like approach. Of course n random numbers can be generated by calling this function n times.
"""
Draws a random number from given probability density function.
Parameters
----------
pdf -- the function pointer to a probability density function of form P = pdf(x)
interval -- the resulting random number is restricted to this interval
pdfmax -- the maximum of the probability density function
integers -- boolean, indicating if the result is desired as integer
max_iterations -- maximum number of 'tries' to find a combination of random numbers (rand_x, rand_y) located below the function value calc_y = pdf(rand_x).
returns a single random number according the pdf distribution.
"""
def draw_random_number_from_pdf(pdf, interval, pdfmax = 1, integers = False, max_iterations = 10000):
for i in range(max_iterations):
if integers == True:
rand_x = np.random.randint(interval[0], interval[1])
else:
rand_x = (interval[1] - interval[0]) * np.random.random(1) + interval[0] #(b - a) * random_sample() + a
rand_y = pdfmax * np.random.random(1)
calc_y = pdf(rand_x)
if(rand_y <= calc_y ):
return rand_x
raise Exception("Could not find a matching random number within pdf in " + max_iterations + " iterations.")
In my opinion this solution performs better than the others if you do not have to retrieve a very large number of random variables. Another benefit is that you only need the PDF and avoid calculating the CDF, the inverse CDF, or weights.
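As a usage sketch (the pdf here is an arbitrary example I picked, not part of the original post): drawing from the density p(x) = 2x on [0, 1], whose maximum is 2.
# the mean of p(x) = 2x on [0, 1] is 2/3, so the sample mean should land nearby
samples = [draw_random_number_from_pdf(lambda x: 2 * x, (0, 1), pdfmax=2).item()
           for _ in range(1000)]
print(sum(samples) / len(samples))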
I am trying to write code to produce confidence intervals for the number of different books in a library (as well as produce an informative plot).
My cousin is at elementary school and every week is given a book by his teacher. He then reads it and returns it in time to get another one the next week. After a while we started noticing that he was getting books he had read before and this became gradually more common over time.
Say the true number of books in the library is N and the teacher picks one uniformly at random (with replacement) to give to you each week. If at week t the number of occasions on which you have received a book you have read is x, then I can produce a maximum likelihood estimate for the number of books in the library following https://math.stackexchange.com/questions/615464/how-many-books-are-in-a-library .
Example: Consider a library with five books A, B, C, D, and E. If you receive books [A, B, A, C, B, B, D] in seven successive weeks, then the value for x (the number of duplicates) will be [0, 0, 1, 1, 2, 3, 3] after each of those weeks, meaning after seven weeks, you have received a book you have already read on three occasions.
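To make the bookkeeping in that example concrete, here is a small helper (added for illustration, not part of the original question) that computes the running duplicate count x from a sequence of received books:
def duplicate_counts(books):
    # running count of the weeks on which the received book had been seen before
    seen, counts, repeats = set(), [], 0
    for book in books:
        if book in seen:
            repeats += 1
        seen.add(book)
        counts.append(repeats)
    return counts

print(duplicate_counts(["A", "B", "A", "C", "B", "B", "D"]))  # [0, 0, 1, 1, 2, 3, 3]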
To visualise the likelihood function (assuming I have understood what one is correctly) I have written the following code which I believe plots the likelihood function. The maximum is around 135 which is indeed the maximum likelihood estimate according to the MSE link above.
from __future__ import division
import random
import matplotlib.pyplot as plt
import numpy as np

# N is the true number of books. t is the number of weeks. unk is the true number of repeats found
t = 30
unk = 3

def numberrepeats(N, t):
    return t - len(set([random.randint(0,N) for i in xrange(t)]))

iters = 1000
ydata = []
for N in xrange(10,500):
    sampledunk = [numberrepeats(N,t) for i in xrange(iters)].count(unk)
    ydata.append(sampledunk/iters)

print "MLE is", np.argmax(ydata)

xdata = range(10, 500)
print len(xdata), len(ydata)
plt.plot(xdata,ydata)
plt.show()
The output is a plot of the estimated likelihood against N, with its maximum around 135.
My questions are these:
Is there an easy way to get a 95% confidence interval and plot it on the diagram?
How can you superimpose a smoothed curve over the plot?
Is there a better way my code should have been written? It isn't very elegant and is also quite slow.
Finding the 95% confidence interval means finding the range of the x axis so that 95% of the time the empirical maximum likelihood estimate we get by sampling (which should theoretically be 135 in this example) will fall within it. The answer #mbatchkarov has given does not currently do this correctly.
There is now a mathematical answer at https://math.stackexchange.com/questions/656101/how-to-find-a-confidence-interval-for-a-maximum-likelihood-estimate .
Looks like you're ok on the first part, so I'll tackle your second and third points.
There are plenty of ways to fit smooth curves, with scipy.interpolate and splines, or with scipy.optimize.curve_fit. Personally, I prefer curve_fit, because you can supply your own function and let it fit the parameters for you.
Alternatively, if you don't want to learn a parametric function, you could do simple rolling-window smoothing with numpy.convolve.
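For example, a rolling-mean smoother with numpy.convolve might look like this (a small sketch; the window length is an arbitrary choice):
import numpy as np

def smooth(y, window=10):
    # boxcar (moving-average) filter; mode='same' keeps the output the same length as the input
    kernel = np.ones(window) / window
    return np.convolve(y, kernel, mode='same')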
As for code quality: you're not taking advantage of numpy's speed, because you're doing things in pure python. I would write your (existing) code like this:
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt

# N is the true number of books.
# t is the number of weeks.
# unk is the true number of repeats found
t = 30
unk = 3

def numberrepeats(N, t, iters):
    rand = np.random.randint(0, N, size=(t, iters))
    # one column per iteration: count how many of the t draws were repeats
    return t - np.array([len(set(r)) for r in rand.T])

iters = 1000
ydata = np.empty(500-10)
for N in xrange(10,500):
    sampledunk = np.count_nonzero(numberrepeats(N,t,iters) == unk)
    ydata[N-10] = sampledunk/iters

print "MLE is", np.argmax(ydata)

xdata = range(10, 500)
print len(xdata), len(ydata)
plt.plot(xdata,ydata)
plt.show()
It's probably possible to optimize this even more, but this change brings your code's runtime from ~30 seconds to ~2 seconds on my machine.
A simple (numerical) way to get a confidence interval is to run your script many times and see how much your estimate varies. You can then use that standard deviation to calculate the confidence interval.
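In code, that repeat-and-measure-spread idea might look like this sketch, where run_once is a placeholder for one full execution of the script above that returns its MLE:
import numpy as np

def confidence_interval(run_once, n_runs=50, conf_factor=1.96):
    # run_once() should re-run the whole estimation and return one maximum likelihood estimate
    estimates = np.array([run_once() for _ in range(n_runs)])
    mean, std = estimates.mean(), estimates.std(ddof=1)
    return mean - conf_factor * std, mean + conf_factor * std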
In the interest of time, another option is to run a bunch of trials at each value of N (I used 2000), and then use random subsampling of those trials to get an estimate of the estimator standard deviation. Basically, this involves selecting a subset of the trials, generating your likelihood curve using that subset, then finding the maximum of that curve to get your estimator. You do this over many subsets and this gives you a bunch of estimators, which you can use to find a confidence interval on your estimator. My full script is as follows:
import numpy as np

t = 30
k = 3

def trial(N):
    return t - len(np.unique(np.random.randint(0, N, size=t)))

def trials(N, n_trials):
    return np.asarray([trial(N) for i in xrange(n_trials)])

n_trials = 2000
Ns = np.arange(1, 501)
results = np.asarray([trials(N, n_trials=n_trials) for N in Ns])

def likelihood(results):
    L = (results == k).mean(-1)
    # boxcar filtering
    n = 10
    L = np.convolve(L, np.ones(n) / float(n), mode='same')
    return L

def max_likelihood_estimate(Ns, results):
    i = np.argmax(likelihood(results))
    return Ns[i]

def max_likelihood(Ns, results):
    # calculate mean from all trials
    mean = max_likelihood_estimate(Ns, results)

    # randomly subsample results to estimate std
    n_samples = 100
    sample_frac = 0.25
    estimates = np.zeros(n_samples)
    for i in xrange(n_samples):
        mask = np.random.uniform(size=results.shape[1]) < sample_frac
        estimates[i] = max_likelihood_estimate(Ns, results[:,mask])
    std = estimates.std()

    sterr = std * np.sqrt(sample_frac)  # is this mathematically sound?
    ci = (mean - 1.96*sterr, mean + 1.96*sterr)
    return mean, std, sterr, ci

mean, std, sterr, ci = max_likelihood(Ns, results)
print "Max likelihood estimate: ", mean
print "Max likelihood 95% ci: ", ci
There are two drawbacks to this method. One is that, since you're taking many subsamples from the same set of trials, your estimates are not independent. To limit the effect of this, I only used 25% of the results for each subset. Another drawback is that each subsample is only a fraction of your data, so estimates derived from these subsets will have more variance than estimates derived from running the full script many times. To account for this, I computed the standard error as the standard deviation divided by the square root of 4, since I had four times as much data in my full data set as in one of the subsamples. However, I'm not familiar enough with Monte Carlo theory to know if this is mathematically sound. Running my script a number of times did seem to indicate that my results were reasonable.
Lastly, I did use a boxcar filter on the likelihood curves to smooth them out a bit. Ideally, this should improve results, but even with the filtering there was still a considerable amount of variability in the results. When calculating the value for the overall estimator, I wasn't sure if it would be better to compute one likelihood curve from all the results and use the max of that (this is what I ended up doing), or to use the mean of all the subset estimators. Using the mean of the subset estimators might help cancel out some of the roughness in the curves that remains after filtering, but I'm not sure about this.
Here is an answer to your first question and a pointer to a solution for the second:
import numpy as np
import matplotlib.pyplot as plt

plt.plot(xdata, ydata)

# calculate the cumulative distribution function
cdf = np.cumsum(ydata) / sum(ydata)

# get the left and right boundary of the interval that contains 95% of the probability mass
right = np.argmax(cdf > 0.975)
left = np.argmax(cdf > 0.025)

# indicate confidence interval with vertical lines
plt.vlines(xdata[left], 0, ydata[left])
plt.vlines(xdata[right], 0, ydata[right])

# hatch confidence interval
plt.fill_between(xdata[left:right], ydata[left:right], facecolor='blue', alpha=0.5)
This produces the original likelihood plot with vertical lines marking the interval boundaries and the 95% region shaded between them.
I'll try to answer question 3 when I have more time :)