Converting a Gaussian to a histogram - Python

I'm running a model of particles, and I want the initial conditions for the particle locations to mimic a Gaussian distribution.
If I have N particles on a 1D grid from -10 to 10, I want them distributed on the grid according to a Gaussian with a known mean and standard deviation. It's basically creating a histogram where each bin has width 1 (the resolution of the location axis is 1), and the frequency of each bin is the number of particles in it; the frequencies should add up to N.
My strategy was to evaluate a Gaussian function on the x-axis grid and then just approximate the value at each point as the number of particles:
import numpy as np
def gaussian(x, mu, sig):
    return 1./(np.sqrt(2.*np.pi)*sig)*np.exp(-np.power((x - mu)/sig, 2.)/2)
mean = 0
sigma = 1
x_values = np.arange(-10, 10, 1)
y = gaussian(x_values, mean, sigma)
However, I have normalization issues (the sum doesn't add up to N), and the number of particles at each point should be an integer (I thought about converting the y array to integers, but because of the normalization issue I just get a flat line).
Usually the problem is fitting a Gaussian to a histogram, but in my case I need to do the reverse, and I couldn't find a solution for it yet. I would appreciate any help!
Thank you!!!

You can use numpy.random.normal to sample this distribution. You can get N points inside the range (-10, 10) that follow a Gaussian distribution with the following code.
import numpy as np
import matplotlib.pyplot as plt
N = 10000
mean = 5
sigma = 3
bin_edges = np.arange(-10, 11, 1)
x_values = (bin_edges[1:] + bin_edges[:-1]) / 2
points = np.random.normal(mean, sigma, N * 10)
mask = np.logical_and(points < 10, points > -10)
points = points[mask] # drop points outside range
points = points[:N] # only use the first N points
y, _ = np.histogram(points, bins=bin_edges)
plt.scatter(x_values, y)
plt.show()
The idea is to generate many more random numbers than needed (10*N in the code) and ignore the points outside your desired range.
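If you want a deterministic assignment instead of random sampling, so that the integer counts always add up to exactly N, here is a small sketch of an alternative (assuming scipy is available; the largest-remainder rounding at the end is just one way to force the total to be exactly N):
import numpy as np
from scipy.stats import norm
N = 10000
mean, sigma = 0, 1
bin_edges = np.arange(-10, 11, 1)
# probability mass of the Gaussian in each bin, renormalized to the (-10, 10) window
prob = np.diff(norm.cdf(bin_edges, loc=mean, scale=sigma))
prob /= prob.sum()
# ideal (fractional) counts; round down, then give the leftover particles
# to the bins with the largest remainders so the counts sum to exactly N
ideal = prob * N
counts = np.floor(ideal).astype(int)
leftover = N - counts.sum()
counts[np.argsort(ideal - counts)[::-1][:leftover]] += 1
print(counts.sum())  # N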

Related

Plot normal distribution over histogram

I am new to Python, and in the following code I would like to plot a bell curve to show how the data follows a normal distribution. How would I go about it? Also, can anyone explain why, when showing the hist, I have values (x-axis) greater than 100? I would assume that by defining the Randels to 100 it would not show anything above it. If I am not mistaken, the x-axis represents which "floor" I am on and the y-axis represents how many observations matched that floor. By the way, this is a DataCamp project.
"""
Let's say I roll a dice to determine if I go up or down a step in a building with
100 floors (1 step = 1 floor). If the dice is less than 2, I go down a step. If
the dice is less than or equal to 5, I go up a step, and if the dice is equal to 6,
I go up x steps based on a random integer generator between 1 and 6. What is the probability
I will be higher than floor 60?
"""
import numpy as np
import matplotlib.pyplot as plt
# Set the seed
np.random.seed(123)
# Simulate random walk
all_walks = []
for i in range(1000):
    random_walk = [0]
    for x in range(100):
        step = random_walk[-1]
        dice = np.random.randint(1, 7)
        if dice <= 2:
            step = max(0, step - 1)
        elif dice <= 5:
            step = step + 1
        else:
            step = step + np.random.randint(1, 7)
        if np.random.rand() <= 0.001:  # There's a 0.1% chance I fall and have to start at 0
            step = 0
        random_walk.append(step)
    all_walks.append(random_walk)
# Create and plot np_aw_t
np_aw_t = np.transpose(np.array(all_walks))
# Select last row from np_aw_t: ends
ends = np_aw_t[-1,:]
# Plot histogram of ends, display plot
plt.hist(ends,bins=10,edgecolor='k',alpha=0.65)
plt.style.use('fivethirtyeight')
plt.xlabel("Floor")
plt.ylabel("# of times in floor")
plt.show()
You can use scipy.stats.norm to get a normal distribution. Documentation for it here. To fit any function to a data set you can use scipy.optimize.curve_fit(), documentation for that here. My suggestion would be something like the following:
import scipy.stats as ss
import numpy as np
import scipy.optimize as opt
import matplotlib.pyplot as plt
#Making a figure with two y-axis (one for the hist, one for the pdf)
#An alternative would be to multiply the pdf by the sum of counts if you just want to show the fit.
fig, ax = plt.subplots(1,1)
twinx = ax.twinx()
rands = ss.norm.rvs(loc = 1, scale = 1, size = 1000)
#hist returns the bins and the value of each bin, plot to the y-axis ax
hist = ax.hist(rands)
vals, bins = hist[0], hist[1]
#calculating the center of each bin
bin_centers = [(bins[i] + bins[i+1])/2 for i in range(len(bins)-1)]
#finding the best fit coefficients, note vals/sum(vals) to get the probability in each bin instead of the count
coeff, cov = opt.curve_fit(ss.norm.pdf, bin_centers, vals/sum(vals), p0 = [0,1] )
#loc and scale are mean and standard deviation i believe
loc, scale = coeff
#x-values to plot the normal distribution curve
x = np.linspace(min(bins), max(bins), 100)
#Evaluating the pdf with the best fit mean and std
p = ss.norm.pdf(x, loc = loc, scale = scale)
#plot the pdf to the other axis and show
twinx.plot(x,p)
plt.show()
There are likely more elegant ways to do this, but if you are new to Python and are going to use it for calculations and such, getting to know curve_fit and scipy.stats is recommended. I'm not sure I understand what you mean by "defining the Randels"; hist will plot a "standard" histogram with bins on the x-axis and the count in each bin on the y-axis. When using these counts to fit a pdf we can just divide all the counts by the total number of counts.
Hope that helps, just ask if anything is unclear :)
Edit: compact version
vals, bins,_ = ax.hist(my_histogram_data)
bin_centers = [(bins[i] + bins[i+1])/2 for i in range(len(bins)-1)]
coeff, cov = opt.curve_fit(ss.norm.pdf, bin_centers, vals/sum(vals), p0 = [0,1] )
x = np.linspace(min(bins), max(bins), 100)
p = ss.norm.pdf(x, loc = coeff[0], scale = coeff[1])
#p is now the fitted normal distribution
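As an even shorter alternative (a sketch, not part of the answer above): when the raw samples are available, as they are here, scipy.stats.norm.fit estimates the mean and standard deviation directly from the data without building a histogram first.
import numpy as np
import scipy.stats as ss
rands = ss.norm.rvs(loc=1, scale=1, size=1000)
loc, scale = ss.norm.fit(rands)  # maximum-likelihood estimates of mean and std
x = np.linspace(rands.min(), rands.max(), 100)
p = ss.norm.pdf(x, loc=loc, scale=scale)  # fitted pdf, analogous to p above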

scipy normal distribution with scale greater and less than 1 [duplicate]

This question already has answers here:
Why does scipy.norm.pdf sometimes give PDF > 1? How to correct it?
(3 answers)
Closed 2 years ago.
I'm using the normal distribution from scipy and having a hard time understanding its documentation. Let's say I have a normal distribution with a mean of 5 and a standard deviation of 0.25:
import numpy as np
from matplotlib import pyplot as plt
from scipy.stats import norm
mean = 5
std = 0.25
x = np.linspace(mean - 3*std, mean + 3*std, 1000)
y = norm(loc=mean, scale=std).pdf(x)
plt.plot(x,y)
The resulting chart is the familiar bell curve but with its peak at around 1.6. How can the probability of any value exceed 1? If I multiply it by scale then the probabilities are correct.
No such problem when std (and scale) are greater than 1 however:
mean = 5
std = 10
x = np.linspace(mean - 3*std, mean + 3*std, 1000)
y = norm(loc=mean, scale=std).pdf(x)
plt.plot(x,y)
The documentation on norm says loc is the mean and scale is the standard deviation. Why does it behave so differently when scale is greater than versus less than 1?
Python 3.8.2. Scipy 1.4.1
The "bell curve" you are plotting is a probability density function (PDF). This means that the probability for a random variable with that distribution falling in any interval [a, b] is the area under the curve between a and b. Thus the whole area under the curve (from -infinity to +infinity) must be 1. So when the standard deviation is small, the maximum of the PDF may well be greater than 1, there is nothing strange about that.
Follow-up question: Is the area under the curve in the first plot really 1?
Yes, it is. One way to confirm this is to approximate the area under the curve by calculating the total area of a series of rectangles whose heights are defined by the curve:
import numpy as np
from matplotlib import pyplot as plt
from scipy.stats import norm
import matplotlib.patches as patches
mean = 5
std = 0.25
x = np.linspace(4, 6, 1000)
y = norm(loc=mean, scale=std).pdf(x)
fig, ax = plt.subplots()
ax.plot(x, y)
ax.set_aspect('equal')
ax.set_xlim([4, 6])
ax.set_ylim([0, 1.7])
# Approximate area under the curve by summing over rectangles:
xlim_approx = [4, 6] # locations of left- and rightmost rectangle
n_approx = 17 # number of rectangles
# width of one rectangle:
width_approx = (xlim_approx[1] - xlim_approx[0]) / n_approx
# x-locations of rectangles:
x_approx = np.linspace(xlim_approx[0], xlim_approx[1], n_approx)
# heights of rectangles:
y_approx = norm(loc=mean, scale=std).pdf(x_approx)
# plot approximation rectangles:
for i, xi in enumerate(x_approx):
    ax.add_patch(patches.Rectangle((xi - width_approx/2, 0), width_approx,
                                   y_approx[i], facecolor='gray', alpha=.3))
# areas of the rectangles:
areas = y_approx * width_approx
# total area of the rectangles:
print(sum(areas))
0.9411599204607589
Okay, that's not quite 1, but let's get a better approximation by extending the x-limits and increasing the number of rectangles:
xlim_approx = [0, 10]
n_approx = 100_000
width_approx = (xlim_approx[1] - xlim_approx[0]) / n_approx
x_approx = np.linspace(xlim_approx[0], xlim_approx[1], n_approx)
y_approx = norm(loc=mean, scale=std).pdf(x_approx)
areas = y_approx * width_approx
print(sum(areas))
0.9999899999999875
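For an exact check (a small addition to the answer above), the area between two points can be computed directly from the distribution's CDF instead of summing rectangles:
import numpy as np
from scipy.stats import norm
dist = norm(loc=5, scale=0.25)
print(dist.cdf(6) - dist.cdf(4))             # ~0.99994, area between 4 and 6
print(dist.cdf(np.inf) - dist.cdf(-np.inf))  # 1.0, total area under the pdf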

How to generate a random sample of points from a 3-D ellipsoid using Python?

I am trying to sample around 1000 points from a 3-D ellipsoid, uniformly. Is there some way to code it such that we can get points starting from the equation of the ellipsoid?
I want points on the surface of the ellipsoid.
Theory
Using this excellent answer to the MSE question How to generate points uniformly distributed on the surface of an ellipsoid?, we can generate a point uniformly on the sphere, apply the mapping f : (x,y,z) -> (x'=ax, y'=by, z'=cz), and then correct the distortion created by the map by discarding the point randomly with some probability p(x,y,z).
Assuming that the 3 axes of the ellipsoid are named such that
0 < a < b < c
We discard a generated point with probability
p(x,y,z) = 1 - mu(x,y,z)/mu_max,
i.e. we keep it with probability mu(x,y,z)/mu_max, where
mu(x,y,z) = ((acy)^2 + (abz)^2 + (bcx)^2)^0.5
and
mu_max = bc
Implementation
import numpy as np
np.random.seed(42)
# Function to generate a random point on a uniform sphere
# (relying on https://stackoverflow.com/a/33977530/8565438)
def randompoint(ndim=3):
    vec = np.random.randn(ndim, 1)
    vec /= np.linalg.norm(vec, axis=0)
    return vec
# Give the length of each axis (example values):
a, b, c = 1, 2, 4
# Function to scale up generated points using the function `f` mentioned above:
f = lambda x,y,z : np.multiply(np.array([a,b,c]),np.array([x,y,z]))
# Keep the point with probability `mu(x,y,z)/mu_max`, ie
def keep(x, y, z, a=a, b=b, c=c):
    mu_xyz = ((a * c * y) ** 2 + (a * b * z) ** 2 + (b * c * x) ** 2) ** 0.5
    return mu_xyz / (b * c) > np.random.uniform(low=0.0, high=1.0)
# Generate points until we have, let's say, 1000 points:
n = 1000
points = []
while len(points) < n:
    [x], [y], [z] = randompoint()
    if keep(x, y, z):
        points.append(f(x, y, z))
Checks
Check that all generated points satisfy the ellipsoid condition (i.e. that x^2/a^2 + y^2/b^2 + z^2/c^2 = 1):
for p in points:
    pscaled = np.multiply(p, np.array([1/a, 1/b, 1/c]))
    assert np.allclose(np.sum(np.dot(pscaled, pscaled)), 1)
Runs without raising any errors. Visualize the points:
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
points = np.array(points)
ax.scatter(points[:, 0], points[:, 1], points[:, 2])
# set aspect ratio for the axes using https://stackoverflow.com/a/64453375/8565438
ax.set_box_aspect((np.ptp(points[:, 0]), np.ptp(points[:, 1]), np.ptp(points[:, 2])))
plt.show()
These points seem evenly distributed.
Problem with currently accepted answer
Generating a point on a sphere and then just reprojecting it onto an ellipsoid without any further corrections will result in a distorted distribution. This is essentially the same as setting this post's p(x,y,z) to 0. Imagine an ellipsoid where one axis is orders of magnitude bigger than another. This way, it is easy to see that naive reprojection is not going to work.
Consider using Monte-Carlo simulation: generate a random 3D point; check if the point is inside the ellipsoid; if it is, keep it. Repeat until you get 1,000 points.
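A minimal sketch of that rejection loop (with arbitrary example semi-axes; note it samples points inside the ellipsoid, which is what this answer addressed before the question was changed):
import numpy as np
rng = np.random.default_rng(0)
a, b, c = 1.0, 2.0, 4.0  # example semi-axes
n = 1000
points = []
while len(points) < n:
    # random point in the bounding box of the ellipsoid
    p = rng.uniform(-1.0, 1.0, size=3) * np.array([a, b, c])
    # keep it only if it lies inside the ellipsoid
    if (p[0]/a)**2 + (p[1]/b)**2 + (p[2]/c)**2 <= 1.0:
        points.append(p)
points = np.array(points)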
P.S. Since the OP changed their question, this answer is no longer valid.
J.F. Williamson, "Random selection of points distributed on curved surfaces", Physics in Medicine & Biology 32(10), 1987, describes a general method of choosing a uniformly random point on a parametric surface. It is an acceptance/rejection method that accepts or rejects each candidate point depending on its stretch factor (norm-of-gradient). To use this method for a parametric surface, several things have to be known about the surface, namely—
x(u, v), y(u, v) and z(u, v), which are functions that generate 3-dimensional coordinates from two dimensional coordinates u and v,
The ranges of u and v,
g(point), the norm of the gradient ("stretch factor") at each point on the surface, and
gmax, the maximum value of g for the entire surface.
The algorithm is then:
Generate a point on the surface, xyz.
If g(xyz) >= RNDU01()*gmax, where RNDU01() is a uniform random variate in [0, 1), accept the point. Otherwise, repeat this process.
Chen and Glotzer (2007) apply the method to the surface of a prolate spheroid (one form of ellipsoid) in "Simulation studies of a phenomenological model for elongated virus capsid formation", Physical Review E 75(5), 051504 (preprint).
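As a rough sketch of this acceptance/rejection method applied to the ellipsoid parametrization x = a*sin(u)*cos(v), y = b*sin(u)*sin(v), z = c*cos(u) (not code from either paper; here g is the norm of the cross product of the partial derivatives, and gmax is bounded by the largest pairwise product of the semi-axes):
import numpy as np
rng = np.random.default_rng(0)
a, b, c = 1.0, 2.0, 4.0  # example semi-axes
def g(u, v):
    # "stretch factor": norm of the cross product of the partial derivatives
    return np.sin(u) * np.sqrt((b*c*np.sin(u)*np.cos(v))**2 +
                               (a*c*np.sin(u)*np.sin(v))**2 +
                               (a*b*np.cos(u))**2)
gmax = max(a*b, a*c, b*c)  # upper bound on g over the whole surface
def sample_point():
    while True:
        u = rng.uniform(0.0, np.pi)         # polar-like parameter
        v = rng.uniform(0.0, 2.0*np.pi)     # azimuthal parameter
        if g(u, v) >= rng.random() * gmax:  # accept with probability g/gmax
            return np.array([a*np.sin(u)*np.cos(v),
                             b*np.sin(u)*np.sin(v),
                             c*np.cos(u)])
points = np.array([sample_point() for _ in range(1000)])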
Here is a generic function to pick a random point on the surface of a sphere, spheroid or any triaxial ellipsoid with parameters a, b and c. Note that generating the angles directly will not provide a uniform distribution and will cause an excessive population of points along the z direction. Instead, phi is obtained as the inverse of a randomly generated cos(phi).
import numpy as np
def random_point_ellipsoid(a, b, c):
    u = np.random.rand()
    v = np.random.rand()
    theta = u * 2.0 * np.pi
    phi = np.arccos(2.0 * v - 1.0)
    sinTheta = np.sin(theta)
    cosTheta = np.cos(theta)
    sinPhi = np.sin(phi)
    cosPhi = np.cos(phi)
    rx = a * sinPhi * cosTheta
    ry = b * sinPhi * sinTheta
    rz = c * cosPhi
    return rx, ry, rz
This function is adapted from this post: https://karthikkaranth.me/blog/generating-random-points-in-a-sphere/
One way of doing this which generalises to any shape or surface is to convert the surface to a voxel representation at arbitrarily high resolution (the higher the resolution the better, but also the slower). Then you can easily select the voxels randomly however you want, and then you can select a point on the surface within the voxel using the parametric equation. The voxel selection should be completely unbiased, and the selection of the point within the voxel will suffer the same biases that come from using the parametric equation, but if there are enough voxels then the size of these biases will be very small.
You need high-quality cube intersection code, but with something like an ellipsoid it can be optimised quite easily. I'd suggest stepping through the bounding box subdivided into voxels. A quick distance check will eliminate most cubes, and you can do a proper intersection check for the ones where an intersection is possible. For the point within the cube I'd be tempted to do something simple like a random XYZ distance from the centre and then cast a ray from the centre of the ellipsoid; the selected point is where the ray intersects the surface. As I said above, it will be biased, but with small voxels the bias will probably be small enough.
There are libraries that do convex shape intersection very efficiently, and cube/ellipsoid will be one of the options. They will be highly optimised, but I think the distance culling would probably be worth doing by hand whatever. And you will need a library that differentiates between a surface intersection and one object being totally inside the other.
And if you know your ellipsoid is aligned to an axis then you can do the voxel/edge intersection very easily as a stack of 2D square-intersects-ellipse problems, with the set of squares to be tested defined as those that are adjacent to those in the layer above. That might be quicker.
One of the things that makes this approach more manageable is that you do not need to write all the code for edge cases (it is a lot of work to get around issues with floating-point inaccuracies that can lead to missing or doubled voxels at the intersection). That's because these will be very rare, so they won't affect your sampling.
It might even be quicker to simply find all the voxels inside the ellipsoid and then throw away all the voxels with 6 neighbours... Lots of options. It all depends how important performance is. This will be much slower than the other suggestions, but if you want ~1000 points then ~100,000 voxels feels about the minimum for the surface, so you probably need ~1,000,000 voxels in your bounding box. However, even testing 1,000,000 intersections is pretty fast on modern computers.
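A rough sketch of this voxel idea for an axis-aligned ellipsoid (an illustration only, with arbitrary resolution and semi-axes): voxels crossed by the surface are detected by checking whether the implicit function changes sign across the voxel corners, a surface voxel is picked at random, a random point inside it is chosen, and that point is projected onto the surface along a ray from the centre.
import numpy as np
rng = np.random.default_rng(0)
a, b, c = 1.0, 2.0, 4.0   # semi-axes
res = 60                  # voxels per axis (higher means less bias, but slower)
# voxel corner coordinates over the bounding box
xs = np.linspace(-a, a, res + 1)
ys = np.linspace(-b, b, res + 1)
zs = np.linspace(-c, c, res + 1)
X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
inside = (X/a)**2 + (Y/b)**2 + (Z/c)**2 <= 1.0
# a voxel is a "surface voxel" if its 8 corners are not all inside or all outside
corner_sum = sum(inside[i:i+res, j:j+res, k:k+res].astype(int)
                 for i in (0, 1) for j in (0, 1) for k in (0, 1))
surface_voxels = np.argwhere((corner_sum > 0) & (corner_sum < 8))
def sample_surface_point():
    i, j, k = surface_voxels[rng.integers(len(surface_voxels))]
    # random point inside the chosen voxel
    p = np.array([rng.uniform(xs[i], xs[i+1]),
                  rng.uniform(ys[j], ys[j+1]),
                  rng.uniform(zs[k], zs[k+1])])
    # project onto the surface along the ray from the centre
    t = 1.0 / np.sqrt((p[0]/a)**2 + (p[1]/b)**2 + (p[2]/c)**2)
    return t * p
points = np.array([sample_surface_point() for _ in range(1000)])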
Depending on what "uniformly" refers to, different methods are applicable. In any case, we can use the parametric equations in spherical coordinates (from Wikipedia):
x = a*s*sin(theta)*cos(phi), y = b*s*sin(theta)*sin(phi), z = c*s*cos(theta)
where s = 1 refers to the ellipsoid given by the semi-axes a > b > c. From these equations we can derive the relevant volume/area element and generate points such that their probability of being generated is proportional to that volume/area element. This will provide constant volume/area density across the surface of the ellipsoid.
1. Constant volume density
This method generates points on the surface of an ellipsoid such that their volume density across the surface of the ellipsoid is constant. A consequence of this is that the one-dimensional projections (i.e. the x, y, z coordinates) are uniformly distributed; for details see the plot below.
The volume element for a triaxial ellipsoid in these coordinates is given by (see here):
dV = a*b*c*s^2*sin(theta) ds dtheta dphi
and is thus proportional to sin(theta) (for 0 <= theta <= pi). We can use this as the basis for a probability distribution that indicates "how many" points should be generated for a given value of theta: where the area density is low/high, the probability for generating a corresponding value of theta should be low/high, too.
Hence, we can use the function f(theta) = sin(theta)/2 as our probability distribution on the interval [0, pi]. The corresponding cumulative distribution function is F(theta) = (1 - cos(theta))/2. Now we can use Inverse transform sampling to generate values of theta according to f(theta) from a uniform random distribution. The values of phi can be obtained directly from a uniform distribution on [0, 2*pi].
Example code:
import matplotlib.pyplot as plt
import numpy as np
from numpy import sin, cos, pi
rng = np.random.default_rng(seed=0)
a, b, c = 10, 3, 1
N = 5000
phi = rng.uniform(0, 2*pi, size=N)
theta = np.arccos(1 - 2*rng.random(size=N))
x = a*sin(theta)*cos(phi)
y = b*sin(theta)*sin(phi)
z = c*cos(theta)
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(x, y, z, s=2)
plt.show()
which produces the following plot:
The following plot shows the one-dimensional projections (i.e. density plots of x, y, z):
import seaborn as sns
sns.kdeplot(data=dict(x=x, y=y, z=z))
plt.show()
2. Constant area density
This method generates points on the surface of an ellipsoid such that their area density is constant across the surface of the ellipsoid.
Again, we start by calculating the corresponding area element. For simplicity we can use SymPy:
from sympy import cos, sin, symbols, Matrix
a, b, c, t, p = symbols('a b c t p')
x = a*sin(t)*cos(p)
y = b*sin(t)*sin(p)
z = c*cos(t)
J = Matrix([
    [x.diff(t), x.diff(p)],
    [y.diff(t), y.diff(p)],
    [z.diff(t), z.diff(p)],
])
print((J.T @ J).det().simplify())
This yields
-a**2*b**2*sin(t)**4 + a**2*b**2*sin(t)**2 + a**2*c**2*sin(p)**2*sin(t)**4 - b**2*c**2*sin(p)**2*sin(t)**4 + b**2*c**2*sin(t)**4
and further simplifies to (dividing by (a*b)**2 and taking the sqrt):
sin(t)*np.sqrt(1 + ((c/b)**2*sin(p)**2 + (c/a)**2*cos(p)**2 - 1)*sin(t)**2)
Since for this case the area element is more complex, we can use rejection sampling:
import matplotlib.pyplot as plt
import numpy as np
from numpy import cos, sin
def f_redo(t, p):
    return (
        sin(t)*np.sqrt(1 + ((c/b)**2*sin(p)**2 + (c/a)**2*cos(p)**2 - 1)*sin(t)**2)
        < rng.random(size=t.size)
    )
rng = np.random.default_rng(seed=0)
N = 5000
a, b, c = 10, 3, 1
t = rng.uniform(0, np.pi, size=N)
p = rng.uniform(0, 2*np.pi, size=N)
redo = f_redo(t, p)
while redo.any():
    t[redo] = rng.uniform(0, np.pi, size=redo.sum())
    p[redo] = rng.uniform(0, 2*np.pi, size=redo.sum())
    redo[redo] = f_redo(t[redo], p[redo])
x = a*np.sin(t)*np.cos(p)
y = b*np.sin(t)*np.sin(p)
z = c*np.cos(t)
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(x, y, z, s=2)
plt.show()
which yields the following distribution:
The following plot shows the corresponding one-dimensional projections (x, y, z):

Beginner Python Monte Carlo Simulation

I'm a beginner at Python and am working through exercises set by our instructor. I am struggling with this question.
In the Python editor, write a Monte Carlo simulation to estimate the value of the number π.
Specifically, follow these steps:
A. Produce two arrays, one called x, one called y, which contain 100 elements each,
which are randomly and uniformly distributed real numbers between -1 and 1.
B. Plot y versus x as dots in a plot. Label your axes accordingly.
C. Write down a mathematical expression that defines which (x, y) pairs of data points
are located in a circle with radius 1, centred on the (0, 0) origin of the graph.
D. Use Boolean masks to identify the points inside the circle, and overplot them in a
different colour and marker size on top of the data points you already plotted in B.
This is what I have at the moment.
import numpy as np
import math
import matplotlib.pyplot as plt
np.random.seed(12345)
x = np.random.uniform(-1,1,100)
y = np.random.uniform(-1,1,100)
plt.plot(x, y)  # this works
for i in x:
    newarray = (1 > math.sqrt(y[i]*y[i] + x[i]*x[i]))
    plt.plot(newarray)
Any suggestions?
As pointed out in the comment, the error in your code is that for i in x should be for i in range(len(x)).
If you want to actually use a Boolean mask, as the exercise asks, you could do something like this:
import pandas as pd
allpoints = pd.DataFrame({'x':x, 'y':y})
# this is your boolean mask
mask = pow(allpoints.x, 2) + pow(allpoints.y, 2) < 1
circlepoints = allpoints[mask]
plt.scatter(allpoints.x, allpoints.y)
plt.scatter(circlepoints.x, circlepoints.y)
Increasing the number of points to 10000, you would get something like this:
To estimate pi you can use the famous Monte Carlo derivation:
>>> n = 10000
>>> (len(circlepoints) * 4) / float(n)
3.1464
You are close to the solution. I slightly reshape your MCVE:
import numpy as np
import math
import matplotlib.pyplot as plt
np.random.seed(12345)
N = 10000
x = np.random.uniform(-1, 1, N)
y = np.random.uniform(-1, 1, N)
Now, we compute a criterion that makes sense in this context, such as the squared distance of points to the origin:
d = x**2 + y**2
Then we use Boolean Indexing to discriminate between points within and outside the Unit Circle:
q = (d <= 1)
At this point lies the Monte Carlo hypothesis. We assume that the ratio of uniformly distributed points falling inside the Circle to those in the square U(-1,1)xU(-1,1) is representative of the ratio of the Area of the Unit Circle to that of the Square, Ac/As. Then we can statistically estimate pi = 4*(Ac/As) from the fraction of points within the Circle. This leads to:
pi = 4*q.sum()/q.size # 3.1464
Finally we plot the result:
fig, axe = plt.subplots()
axe.plot(x[q], y[q], '.', color='green', label=r'$d \leq 1$')
axe.plot(x[~q], y[~q], '.', color='red', label=r'$d > 1$')
axe.set_aspect('equal')
axe.set_title(r'Monte Carlo: $\pi$ Estimation')
axe.set_xlabel('$x$')
axe.set_ylabel('$y$')
axe.legend(bbox_to_anchor=(1, 1), loc='upper left')
fig.savefig('MonteCarlo.png', dpi=120)
It outputs:

How can I set a minimum distance constraint for generating points with numpy.random.rand?

I am trying to generate an efficient code for generating a number of random position vectors which I then use to calculate a pair correlation function. I am wondering if there is straightforward way to set a constraint on the minimum distance allowed between any two points placed in my box.
My code currently is as follows:
def pointRun(number, dr):
    """
    Compute the 3D pair correlation function
    for a random distribution of 'number' particles
    placed into a 1.0x1.0x1.0 box.
    """
    ## Create array of distances over which to calculate.
    r = np.arange(0., 1.0 + dr, dr)
    ## Generate list of arrays to define the positions of all points,
    ## and calculate number density.
    a = np.random.rand(number, 3)
    numberDensity = len(a)/1.0**3
    ## Find reference points within desired region to avoid edge effects.
    b = [s for s in a if all(s > 0.4) and all(s < 0.6)]
    ## Compute pairwise correlation for each reference particle.
    dist = scipy.spatial.distance.cdist(a, b, 'euclidean')
    allDists = dist[(dist < np.sqrt(3))]
    ## Create histogram to generate radial distribution function (RDF), or R(r).
    Rr, bins = np.histogram(allDists, bins=r, density=False)
    ## Make empty containers to hold radii and pair density values.
    radii = []
    rhor = []
    ## Normalize RDF values by distance and shell volume to get pair density.
    for i in range(len(Rr)):
        y = (r[i] + r[i+1])/2.
        radii.append(y)
        x = np.average(Rr[i])/(4./3.*np.pi*(r[i+1]**3 - r[i]**3))
        rhor.append(x)
    ## Generate normalized pair density function, by total number density.
    gr = np.divide(rhor, numberDensity)
    return radii, gr
I have previously tried using a loop that calculated all distances for each point as it was made and then accepted or rejected it. This method was very slow if I used a lot of points.
Here is a scalable O(n) solution using numpy. It works by initially specifying an equidistant grid of points and then perturbing each point by some amount, keeping the distance between any two points at least min_dist.
You'll want to tweak the number of points, box shape and perturbation sensitivity to get the min_dist you want.
Note: If you fix the size of a box and specify a minimum distance between every point, it makes sense that there will be a limit to the number of points you can draw satisfying the minimum distance.
import numpy as np
import matplotlib.pyplot as plt
# specify params
n = 500
shape = np.array([64, 64])
sensitivity = 0.8 # 0 means no movement, 1 means max distance is init_dist
# compute grid shape based on number of points
width_ratio = shape[1] / shape[0]
num_y = np.int32(np.sqrt(n / width_ratio)) + 1
num_x = np.int32(n / num_y) + 1
# create regularly spaced neurons
x = np.linspace(0., shape[1]-1, num_x, dtype=np.float32)
y = np.linspace(0., shape[0]-1, num_y, dtype=np.float32)
coords = np.stack(np.meshgrid(x, y), -1).reshape(-1,2)
# compute spacing
init_dist = np.min((x[1]-x[0], y[1]-y[0]))
min_dist = init_dist * (1 - sensitivity)
assert init_dist >= min_dist
print(min_dist)
# perturb points
max_movement = (init_dist - min_dist)/2
noise = np.random.uniform(
low=-max_movement,
high=max_movement,
size=(len(coords), 2))
coords += noise
# plot
plt.figure(figsize=(10*width_ratio,10))
plt.scatter(coords[:,0], coords[:,1], s=3)
plt.show()
Based on #Samir 's answer, and make it a callable function for your convenience :)
import numpy as np
import matplotlib.pyplot as plt
def generate_points_with_min_distance(n, shape, min_dist):
    # compute grid shape based on number of points
    width_ratio = shape[1] / shape[0]
    num_y = np.int32(np.sqrt(n / width_ratio)) + 1
    num_x = np.int32(n / num_y) + 1
    # create regularly spaced neurons
    x = np.linspace(0., shape[1]-1, num_x, dtype=np.float32)
    y = np.linspace(0., shape[0]-1, num_y, dtype=np.float32)
    coords = np.stack(np.meshgrid(x, y), -1).reshape(-1, 2)
    # compute spacing
    init_dist = np.min((x[1]-x[0], y[1]-y[0]))
    # perturb points
    max_movement = (init_dist - min_dist)/2
    noise = np.random.uniform(low=-max_movement,
                              high=max_movement,
                              size=(len(coords), 2))
    coords += noise
    return coords
coords = generate_points_with_min_distance(n=8, shape=(2448,2448), min_dist=256)
# plot
plt.figure(figsize=(10,10))
plt.scatter(coords[:,0], coords[:,1], s=3)
plt.show()
As I understood, you're looking for an algorithm to create many random points in a box such that no two points are closer than some minimum distance. If this is your problem, then you can take advantage of statistical physics and solve it using molecular dynamics software. Moreover, you do need molecular dynamics or Monte Carlo to obtain an exact solution to this problem.
You place N atoms in a rectangular box, create a repulsive interaction of a fixed radius between them (such as a shifted Lennard-Jones interaction), and run the simulation for some time (until you see that the points spread out uniformly throughout the box). By the laws of statistical physics you can show that the positions of the points will be maximally random given the constraint that no two points can be closer than some distance. This would not be true if you used an iterative algorithm, such as placing points one by one and rejecting them if they overlap.
I would estimate a runtime of several seconds for 10000 points, and several minutes for 100k. I use OpenMM for all my molecular dynamics simulations.
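For illustration only (not OpenMM, and only a crude stand-in for a real molecular dynamics run), here is a tiny pure-NumPy sketch of the same idea: start from random positions and repeatedly push apart any pair closer than the minimum distance until no overlaps remain.
import numpy as np
from scipy.spatial.distance import pdist, squareform
rng = np.random.default_rng(0)
n, box, min_dist = 200, 1.0, 0.05
pos = rng.uniform(0, box, size=(n, 3))
for _ in range(10000):                       # relaxation iterations
    d = squareform(pdist(pos))
    np.fill_diagonal(d, np.inf)
    i, j = np.where(d < min_dist)
    if len(i) == 0:
        break                                # no pairs are too close any more
    # push each too-close pair apart along the line joining the two points
    push = (pos[i] - pos[j]) / d[i, j][:, None] * (min_dist - d[i, j])[:, None] * 0.5
    np.add.at(pos, i, push)
    pos = np.clip(pos, 0, box)               # keep points inside the box
print(pdist(pos).min() >= min_dist)          # True once the loop has converged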
#example of generating 50 points in a square of 4000x4000 and with minimum distance of 400
import numpy as np
import random as rnd
n_points=50
x,y = np.zeros(n_points),np.zeros(n_points)
x[0],y[0]=np.round(rnd.uniform(0,4000)),np.round(rnd.uniform(0,4000))
min_distances=[]
i=1
while i < n_points:
    x_temp, y_temp = np.round(rnd.uniform(0, 4000)), np.round(rnd.uniform(0, 4000))
    distances = []
    for j in range(0, i):
        distances.append(np.sqrt((x_temp-x[j])**2 + (y_temp-y[j])**2))
    min_distance = np.min(distances)
    if min_distance > 400:
        min_distances.append(min_distance)
        x[i] = x_temp
        y[i] = y_temp
        i = i + 1
print(x, y)
