Picking a random number from choices using Gaussian probability weighting - Python

I have an array, [1, 1.2, 1.4, 1.5, ...], with 1000 elements. I would like to randomly pick a value from these choices using Gaussian probability weights with a given mean. For example, say I set the mean to 25: the weight of each choice is then given by a Gaussian function centered on 25, i.e. most of the numbers picked are around 25.
This is a duplicate of this question, but using Python instead of JavaScript.
The probability curve looks something like this:
Background Info
I am trying to fit a curve to some data with asymmetric error bars, and I cannot find any Python module to do such fitting. So I am doing a Monte Carlo simulation: I randomly pick x and y data points from within the error range (with the data values as means), repeat this some (let's say) 1000 times, and optimize the mean squared error.
This is what my data looks like:

Couldn't you take advantage of NumPy's random choice method?
numpy.random.choice(array, p=probabilities)
where your probabilities (normalized so they sum to 1, as np.random.choice requires) might be defined from:
probabilities = [scipy.stats.norm(your_mean, your_stdev).pdf(i) for i in array]
Obviously not a ground-up solution, but it takes advantage of a few convenient libraries.
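A minimal runnable sketch along these lines (the array contents, the mean of 25, and the standard deviation of 5 are assumed values for illustration):
import numpy as np
import scipy.stats

array = np.linspace(1, 200, 1000)  # stand-in for the 1000-element array
your_mean, your_stdev = 25, 5      # assumed parameters

# Gaussian weights, normalized so they sum to 1 as np.random.choice requires
weights = scipy.stats.norm(your_mean, your_stdev).pdf(array)
probabilities = weights / weights.sum()

samples = np.random.choice(array, size=1000, p=probabilities)
print(samples.mean())  # should be close to 25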

NumPy's random.triangular function worked for me:
np.random.triangular(left, mode, right, size=None)
left = lowest value
mode = value with the highest probability
right = highest value
size = number of samples to draw
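For example, a short usage sketch (the endpoints 1 and 200 and the mode of 25 are assumed values matching the question's setup):
import numpy as np

# Triangular distribution peaking at 25, as a rough stand-in for Gaussian weighting
samples = np.random.triangular(left=1, mode=25, right=200, size=1000)
print(samples.mean())
Note that this draws continuous values from the triangular distribution rather than picking from the original array of choices, which may or may not matter for the Monte Carlo use case.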

Just build a weight array that stores the weight for each number, then pass it to random.choices. Below, the weight function is filled in with a Gaussian centered on 25 (with a standard deviation of 5) as an example:
import math
import random

def weight_func(x):
    # Gaussian weight: mean 25, standard deviation 5 (example values)
    return math.exp(-((x - 25) ** 2) / (2 * 5 ** 2))

# List of choices
choices = [1, 2, 3, 4, 5, 6]
# List of weights. Note that the weights need not sum to 1.
weights = [weight_func(x) for x in choices]
# Do a weighted sample (the 1000 here is the sample size and is arbitrary)
print(random.choices(choices, k=1000, weights=weights))

Related

Random numbers with user-defined continuous probability distribution

I would like to simulate something related to photon-photon interaction, in particular Halpern scattering (here is the German Wikipedia entry on it: Halpern-Streuung). There, the differential cross section has an angular dependence of (3 + cos^2(theta))^2.
I would like to have a generator of random numbers between 0 and 2*Pi that corresponds to the density function ((3 + cos^2(theta))^2) * (1/(99*Pi/4)), so that values around 0, Pi and 2*Pi occur a little more often than values around Pi/2 and 3*Pi/2.
I have already found that there is a function for randomly outputting discrete values with user-defined probabilities, numpy.random.choice(numpy.arange(1, 7), p=[0.1, 0.05, 0.05, 0.2, 0.4, 0.2]). I could work with that if there were nothing else, but I really want a continuous probability distribution here.
I know that even if there is such a Python function where you can enter a mathematical distribution function, it ultimately produces only a discrete set of values, since irrational numbers cannot be represented exactly with finitely many bits. Still, such a function would be more elegant with a continuous distribution.
Assuming the density function you have is proportional to a probability density function (PDF), you can use the rejection sampling method: draw a uniform random point in a bounding box until the point falls under the density curve. It works for any bounded density function with a closed and bounded domain, as long as you know what the domain and bound are (the bound is the maximum value of f on the domain). In this case, (3 + cos^2(theta))^2 reaches its maximum of 16 where cos^2(theta) = 1, so the bound is 16 * (4/(99*pi)) = 64/(99*math.pi), and the algorithm works as follows:
import math
import random

def sample():
    mn = 0                       # Lowest value of domain
    mx = 2 * math.pi             # Highest value of domain
    bound = 64 / (99 * math.pi)  # Upper bound of PDF value
    while True:  # Do the following until a value is returned
        # Choose an X inside the desired sampling domain.
        x = random.uniform(mn, mx)
        # Choose a Y between 0 and the maximum PDF value.
        y = random.uniform(0, bound)
        # Calculate PDF
        pdf = ((3 + math.cos(x) ** 2) ** 2) * (1 / (99 * math.pi / 4))
        # Does (x, y) fall in the PDF?
        if y < pdf:
            # Yes, so return x
            return x
        # No, so loop
See also the section "Sampling from an Arbitrary Distribution" in my article on randomization.
The following check demonstrates the method's correctness by estimating the probability that the returned sample is less than π/8; this probability should be close to 0.0788:
print(sum(1 if sample()<math.pi/8 else 0 for _ in range(1000000))/1000000)
I had two approaches in mind: the inverse transform sampling method and the "deletion method" (I'll just call it that). For inverse transform sampling, there is an inverse function to my distribution, but I ran into problems in several places with the math functions because of their domains, e.g. math.sqrt(-1); you would have to work around this with if statements. That's why I decided to use Peter's suggestion (the rejection method above).
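For completeness, one way around the domain problems of a closed-form inverse is a purely numerical inverse transform: tabulate the CDF on a grid and invert it by interpolation. This is a sketch of my own, not part of the answer above:
import numpy as np

theta = np.linspace(0, 2 * np.pi, 10001)
pdf = (3 + np.cos(theta) ** 2) ** 2 / (99 * np.pi / 4)
cdf = np.cumsum(pdf)
cdf /= cdf[-1]  # normalize so the tabulated CDF ends at 1

def sample_inverse(n):
    u = np.random.uniform(0, 1, n)
    return np.interp(u, cdf, theta)  # invert the CDF by linear interpolation

print(sample_inverse(5))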
If you collect values from the rejection method in a loop and plot them as a histogram, it also looks quite good; here with 40000 values and 100 bins.
Here is the whole code for anyone who is interested:
import numpy as np
import math
import random
import matplotlib.pyplot as plt

N = 40000
bins = 100

def Deletion_method():
    x = None
    while x == None:
        mn = 0                       # Lowest value of domain
        mx = 2 * math.pi             # Highest value of domain
        bound = 64 / (99 * math.pi)  # Upper bound of PDF value
        # Choose an X inside the desired sampling domain.
        xrad = random.uniform(mn, mx)
        # Choose a Y between 0 and the maximum PDF value.
        y = random.uniform(0, bound)
        # Calculate PDF
        P = ((3 + math.cos(xrad) ** 2) ** 2) * (1 / (99 * math.pi / 4))
        # Does (x, y) fall in the PDF?
        if y < P:
            x = xrad
    return x

Values = []
for k in range(0, N):
    Values = np.append(Values, [Deletion_method()])

plt.hist(Values, bins)
plt.show()

Sampling from a set according to unnormalized log-probabilities in NumPy

I have a 1-D np.ndarray filled with unnormalized log-probabilities that define a categorical distribution. I would like to sample an integer index from this distribution. Since many of the probabilities are small, normalizing and exponentiating the log-probabilities introduces significant numerical error, therefore I cannot use np.random.choice. Effectively, I am looking for a NumPy equivalent to TensorFlow's tf.random.categorical, which works on unnormalized log-probabilities.
If there is not a function in NumPy that achieves this directly, what is an efficient manner to implement such sampling?
In general, there are many ways to choose an integer with a custom distribution, but most of them take weights that are proportional to the given probabilities. If the weights are log probabilities instead, then a slightly different approach is needed. Perhaps the simplest algorithm for this is rejection sampling, described below and implemented in Python. In the following algorithm, the maximum log-probability is max, and there are k integers to choose from.
Choose a uniform random integer i in [0, k).
Get the log-weight corresponding to i, then generate an exponential(1) random number, call it ex.
If max minus ex is less than the log-weight, return i. Otherwise, go to step 1.
Rejection sampling takes constant time per iteration, but the expected number of iterations per sample depends on the shape of the distribution; it is smallest when max is set to the true maximum log-weight. See also Keith Schwarz's discussion of the "Fair Die/Biased Coin" approach to the loaded-die problem.
Now, Python code for this algorithm follows.
import random
import math

def categ(c):
    # Do a weighted choice of an item with the
    # given log-probabilities.
    cm = max(c)  # Find max log probability
    while True:
        # Choose an item at random
        x = random.randint(0, len(c) - 1)
        # Choose it with probability proportional
        # to exp(c[x])
        y = cm - random.expovariate(1)
        # Alternatively: y = math.log(random.random()) + cm
        if y < c[x]:
            return x
The code above generates one variate at a time and uses only Python's base modules, rather than NumPy. Another answer shows how rejection sampling can be implemented in NumPy by blocks of random variates at a time (demonstrated on a different random sampling task, though).
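As a rough illustration (my own sketch, not the other answer's code), a block-based NumPy version of the same rejection idea could look like this:
import numpy as np

def categ_blocked(logits, n, block=4096, rng=None):
    # Rejection sampling from unnormalized log-probabilities, drawing
    # candidate indices and acceptance thresholds in blocks.
    rng = np.random.default_rng() if rng is None else rng
    logits = np.asarray(logits, dtype=float)
    cm = logits.max()
    out = []
    while len(out) < n:
        x = rng.integers(0, len(logits), size=block)  # candidate indices
        y = cm - rng.exponential(1.0, size=block)     # acceptance thresholds
        out.extend(x[y < logits[x]])                  # keep accepted candidates
    return np.array(out[:n])

# Rough check with three items of known probabilities 0.2, 0.3 and 0.5
print(np.bincount(categ_blocked(np.log([0.2, 0.3, 0.5]), 100000)) / 100000)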
The so-called "Gumbel max trick", used especially in machine learning, can be used to sample from a distribution with unnormalized log probabilities. This involves—
("Gumbel") adding a separate Gumbel random variate to each log probability, namely −ln(−ln(U)) where U is a uniform random variate greater than 0 and less than 1, then
("max") choosing the item whose perturbed log probability is highest.
However, the time complexity for this algorithm is linear in the number of items.
The following code illustrates the Gumbel max trick:
import random
import math

def categ(c):
    # Do a weighted choice of an item with the
    # given log-probabilities, using the Gumbel max trick
    return max([[c[i] - math.log(-math.log(random.random())), i]
                for i in range(len(c))])[1]
    # Or:
    # return max([[c[i] - math.log(random.expovariate(1)), i]
    #             for i in range(len(c))])[1]
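A quick usage check (the log-weights below are made-up example values; the last item has the largest log-probability and so should be drawn most often):
logw = [-1.0, -2.0, -0.5, 0.0]  # hypothetical unnormalized log-probabilities
samples = [categ(logw) for _ in range(100000)]
print([samples.count(i) / 100000 for i in range(len(logw))])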

Getting variance values for random samples generated from a standard normal distribution using numpy

I have a function that gives me probability distributions for each class, in terms of a matrix corresponding to mean values and another matrix corresponding to variance values. For example, if I had four classes then I would have the following outputs:
y_means = [1,2,3,4]
y_variance = [0.01,0.02,0.03,0.04]
I need to do the following calculation to the mean values to continue with the rest of my program:
y_means = np.array(y_means)
y_means = np.reshape(y_means,(y_means.size,1))
A = np.random.randn(10,y_means.size)
y_means = np.matmul(A,y_means)
Here, I have used the numpy.random.randn function to generate random samples from a standard normal distribution, and then multiplied them by the matrix of mean values to obtain a new output matrix. The output matrix then has size (10 x 1).
I need to do a similar calculation such that my output_variances will also be a (10 x 1) matrix. But it is not meaningful to multiply the variances in the same way by random samples from a standard normal distribution, because this would also produce negative values. This is undesirable because my ultimate aim is to create a normal distribution with these mean values and their corresponding variance values using:
torch.distributions.normal.Normal(loc=y_means, scale=y_variance)
So my question is whether there is any method by which I can get a variance value for each random sample generated by numpy.random.randn, because then multiplying such a matrix with output_variance would make more sense.
Or if there is any other strategy for this that I might be unaware of, please let me know.
The problem mentioned in the question required another matrix of the same dimension as A that corresponded to a variance measure for the random samples present in A.
Taking a row-wise or column-wise variance of the matrix denoted by A using numpy.var() didn't give a similar 10 x 4 matrix to multiply with y_variance.
I had solved the above problem by using the following approach:
First create a matrix with the same dimensions as A with zero entries, using the following line of code:
A_var = np.zeros_like(A)
then, using torch.distributions, create normal distributions with the values in A as the mean and zeroes as variance:
dist_A = torch.distributions.normal.Normal(loc=torch.Tensor(A), scale=torch.Tensor(A_var))
https://pytorch.org/docs/stable/distributions.html lists all the operations possible on Normal distributions in PyTorch. The sample() method can generate samples of any size from a given distribution. This was exploited to first generate a sample tensor of size 10 x 10 x 4 and then calculate the variance along axis 0.
np.var(np.array(dist_A.sample((10,))), axis=0)
This would result in a variance matrix of size 10 x 4, which can be used for calculations with y_variance.

How to change parameters of a scikit learn function dynamically i.e. find best parameter

I am trying to do dimensionality reduction using the PCA function of sklearn, specifically:
from sklearn.decomposition import PCA
def mypca(X, comp):
    pca = PCA(n_components=comp)
    pca.fit(X)
    PCA(copy=True, n_components=comp, whiten=False)
    Xpca = pca.fit_transform(X)
    return Xpca

for n_comp in range(10, 1000, 20):
    Xpca = mypca(X, n_comp)  # X is a 2 dimensional array
    print Xpca
I am calling the mypca function from a loop with different values for comp. I am doing this in order to find the best value of comp for the problem I am trying to solve. But the mypca function always returns the same value for Xpca, irrespective of the value of comp.
The value it returns is correct for the first value of comp sent from the loop, i.e. the Xpca value it returns each time is the one for comp = 10 in my case.
What should I do in order to find the best value of comp?
You use PCA to reduce the dimension.
From your code:
for n_comp in range(10, 1000, 20):
    Xpca = mypca(X, n_comp)  # X is a 2 dimensional array
    print Xpca
Your input dataset X is only a 2-dimensional array, while the minimum n_comp is 10, so PCA tries to find the 10 best dimensions for you. Since 10 > 2, you will always get the same answer. :)
It looks like you're trying to pass different values for number of components, and re-fit with each. A great thing about PCA is that it's actually not necessary to do this. You can fit the full number of components (even as many components as dimensions in your dataset), then simply discard the components you don't want (i.e. those with small variance). This is equivalent to re-fitting the entire model with fewer components. Saves a lot of computation.
How to do it:
# x = input data, size(<points>, <dimensions>)
# fit the full model
max_components = x.shape[1] # as many components as input dimensions
pca = PCA(n_components=max_components)
pca.fit(x)
# transform the data (contains all components)
y_all = pca.transform(x)
# keep only the top k components (with greatest variance)
k = 2
y = y_all[:, 0:k]
In terms of how to select the number of components, it depends what you want to do. One standard way of choosing the number of components k is to look at the fraction of variance explained (R^2) by each choice of k. If your data is distributed near a low-dimensional linear subspace, then when you plot R^2 vs. k, the curve will have an 'elbow' shape. The elbow will be located at the dimensionality of the subspace. It's good practice to look at this curve because it helps understand the data. Even if there's no clean elbow, it's common to choose a threshold value for R^2, e.g. to preserve 95% of the variance.
Here's how to do it (this should be done on the model with max_components components):
# Calculate fraction of variance explained
# for each choice of number of components
r2 = pca.explained_variance_.cumsum() / x.var(0).sum()
Another way you might want to proceed is to take the PCA-transformed data and feed it to a downstream algorithm (e.g. classifier/regression), then select your number of components based on the performance (e.g. using cross validation).
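For example, a minimal sketch of that cross-validation approach (assuming a classification target y and a logistic-regression classifier as the downstream model; both are illustrative choices, not part of the original answer):
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Try several values of n_components and keep the one with the best
# cross-validated score of the downstream classifier.
pipe = Pipeline([("pca", PCA()), ("clf", LogisticRegression())])
grid = GridSearchCV(pipe, {"pca__n_components": [2, 5, 10, 20, 50]}, cv=5)
grid.fit(x, y)  # x: input data as above, y: class labels (assumed)
print(grid.best_params_)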
Side note: Maybe just a formatting issue, but your code block in mypca() should be indented, or it won't be interpreted as part of the function.

How to compute the shannon entropy and mutual information of N variables

I need to compute the mutual information, and therefore the Shannon entropy, of N variables.
I wrote code that computes the Shannon entropy of a given distribution.
Let's say that I have a variable x, an array of numbers.
Following the definition of Shannon entropy, I need to compute the normalized probability density function, which is easy to get using numpy.histogram.
import scipy.integrate as scint
from numpy import *
from scipy import *

def shannon_entropy(a, bins):
    p, binedg = histogram(a, bins, normed=True)
    p = p / len(p)
    x = binedg[:-1]
    g = -p * log2(p)
    g[isnan(g)] = 0.
    return scint.simps(g, x=x)
Choosing x and the bin number carefully, this function works.
But the function is very dependent on the bin number: choosing different values of this parameter gives different results.
In particular, if my input is an array of constant values:
x=[0,0,0,....,0,0,0]
the entropy of this variable obviously has to be 0. If I choose the bin number equal to 1 I get the right answer, but with different values I get strange, nonsensical (negative) answers. My feeling is this: numpy.histogram has the arguments normed=True or density=True which (as the official documentation says) return the normalized histogram, and I probably make an error at the point where I switch from the probability density function (the output of numpy.histogram) to the probability mass function (the input of the Shannon entropy), which I do as:
p,binedg= histogram(a,bins,normed=True)
p=p/len(p)
I would like to find a way to solve these problems: an efficient method to compute the Shannon entropy that is independent of the bin number.
I also wrote a function to compute the Shannon entropy of a distribution of several variables, but I get the same problem.
The code is below; the input of the function shannon_entropydd is the array in which each position holds one of the variables that are to be involved in the statistical computation:
def intNd(c, axes):
    assert len(c.shape) == len(axes)
    assert all([c.shape[i] == axes[i].shape[0] for i in range(len(axes))])
    if len(axes) == 1:
        return scint.simps(c, axes[0])
    else:
        return intNd(scint.simps(c, axes[-1]), axes[:-1])

def shannon_entropydd(c, bins=30):
    hist, ax = histogramdd(c, bins, normed=True)
    for i in range(len(ax)):
        ax[i] = ax[i][:-1]
    p = -hist * log2(hist)
    p[isnan(p)] = 0
    return intNd(p, ax)
I need these quantities in order to compute the mutual information between certain sets of variables:
M_info(x, y, z) = H(x) + H(y) + H(z) - H(x, y, z)
where H(x) is the Shannon entropy of the variable x.
I have to find a way to compute these quantities, so if someone has a completely different kind of code that works, I can switch to it; I don't need to repair this code, just to find a correct way to compute these statistical functions!
The result will depend pretty strongly on the estimated density. Can you assume a specific form for the density? You can reduce the dependence of the result on the estimate if you avoid histograms or other general-purpose estimates such as kernel density estimates. If you can give more detail about the variables involved, I can make more specific comments.
I worked with estimates of mutual information as part of the work for my dissertation [1]. There is some stuff about MI in section 8.1 and appendix F.
[1] http://riso.sourceforge.net/docs/dodier-dissertation.pdf
I think that if you choose bins = 1, you will always find an entropy of 0, as there is no "uncertainty" over which bin the values fall in ("uncertainty" is what entropy measures). You should choose a number of bins "big enough" to account for the diversity of values that your variable can take. If you have discrete values: for binary values you should have bins >= 2; if the values your variable can take are in {0,1,2}, you should have bins >= 3; and so on.
I must say that I did not read your code, but this works for me:
import numpy as np

x = [0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1]
bins = 10
cx = np.histogram(x, bins)[0]

def entropy(c):
    c_normalized = c / float(np.sum(c))
    c_normalized = c_normalized[np.nonzero(c_normalized)]
    h = -sum(c_normalized * np.log(c_normalized))
    return h

hx = entropy(cx)
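Extending that counting approach to the mutual information of two variables (a sketch of my own, not part of the answer above), using I(x, y) = H(x) + H(y) - H(x, y) with np.histogram2d for the joint counts:
import numpy as np

def entropy_from_counts(c):
    p = c / float(np.sum(c))
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def mutual_information(x, y, bins=10):
    cx = np.histogram(x, bins)[0]
    cy = np.histogram(y, bins)[0]
    cxy = np.histogram2d(x, y, bins)[0]
    return (entropy_from_counts(cx) + entropy_from_counts(cy)
            - entropy_from_counts(cxy.ravel()))

x = np.random.normal(size=1000)
y = x + np.random.normal(size=1000)  # correlated with x
print(mutual_information(x, y))      # noticeably larger than for independent data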
