Converting matplotlib's streamplot coordinates to numpy coordinates - python

I'm currently working with matplotlib to create a model of a specific vector field using matplotlib.pyplot.streamplot.
After creating and coloring the lines in the streamplot, I'm trying to color the whitespace around the divergence points in the plot. I want a color gradient that depends on each white pixel's distance from the nearest divergence point.
The streamplot in question is built according to:
xs=np.linspace(-10,10,2000)
ys=np.linspace(-10,10,2000)
Therefore, if the divergence is located (for demonstration purposes) at (0,0), it will be exactly in the middle of the plot.
Now, the only method I can think of for coloring according to distance from it is rather clunky, since it requires me to:
add a matplotlib.patches.Rectangle on top of the divergence point in a specific color that is not in the image yet,
convert the plot, with the streamlines and rectangles (one rectangle for each divergence point in the streamplot), to a np.array,
find the new coordinates of the colors of the rectangles (they represent the location of the divergence point in the new np.array created from the streamplot),
calculate the pixel colors I want, based on distance from the colored pixels.
This whole method feels way too clunky and over-complicated, and obviously slower than it could be. I'm sure there's a way to convert the coordinates from the matplotlib plot to the ones in the np.array somehow, or perhaps to handle the coloring in matplotlib itself, which would be even easier.
Sadly, I couldn't find a solution that answers this specific need yet.
Thanks in advance for any help given!
EDIT
I'm adding an example (not my code, but a representation of what I wish to achieve).
I want to clarify that the solution of adding a patches.Circle on top of a circle patch is not my go-to, since I'm looking to keep my painting options more dynamic.

If you can define the color intensity you want as a 2-dimensional function, you can plot that function with plt.imshow() and then put the streamplot on top of it. You just need to transform the coordinates linearly to match the image coordinates.
Here is an example:
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [10, 10]
# plot 2d function
grid = np.arange(-1, 1, 0.001)
x, y = np.meshgrid(grid, grid)
z = 1 - (x ** 2 + y ** 2) ** 0.5
plt.imshow(z, cmap='Blues')
# streamplot example from matplotlib docs (modified)
w = 3
Y, X = np.mgrid[-w:w:100j, -w:w:100j]
U = Y ** 2
V = X ** 2
# transform according to previous plot
n = len(grid) / 2
scale = n / w
X = (X + w) * scale
Y = (Y + w) * scale
U = (U + w) * scale
V = (V + w) * scale
plt.xticks(ticks=[0, n, 2*n],
           labels=[-w, 0, w])
plt.yticks(ticks=[0, n, 2*n],
           labels=[-w, 0, w])
plt.streamplot(X, Y, U, V);
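
As an aside, for the original grid (xs = ys = np.linspace(-10, 10, 2000)), the reverse mapping from a data coordinate to the nearest index in an array built on that grid is just the inverse of the linspace. This is a minimal sketch assuming that setup; the helper name and the nearest-index rounding are my own choices, not part of the answer above:

import numpy as np

xs = np.linspace(-10, 10, 2000)
ys = np.linspace(-10, 10, 2000)

def data_to_index(x, y, xs=xs, ys=ys):
    # Invert the uniform grid: fractional position along the axis times (N - 1).
    col = int(round((x - xs[0]) / (xs[-1] - xs[0]) * (len(xs) - 1)))
    row = int(round((y - ys[0]) / (ys[-1] - ys[0]) * (len(ys) - 1)))
    return row, col

# The divergence point at (0, 0) lands near the middle of the 2000x2000 array:
print(data_to_index(0.0, 0.0))  # ~ (1000, 1000)

Whether row 0 corresponds to the bottom or the top of the plot depends on the origin argument you use when displaying the array with imshow.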


How to plot a vector field in Spherical Coordinates in Python
Is it possible to plot a graph (2D or 3D) of a vector field in spherical coordinates using matplotlib? The field in question is H = (2*cos(theta)/r^3) e_r + (sin(theta)/r^3) e_theta.
Need some help.
You may change the parameters in the following script to explore the vector field:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Parameters
pts = 100
x_range = 10
y_range = 10
colors_in_quiver = 20
headwidth = 5.0
minlength = 3
pivot = 'tail'

# Initialization
x, y = np.linspace(-x_range, x_range, pts), np.linspace(-y_range, y_range, pts)
x, y = np.meshgrid(x, y)
r = np.sqrt(x**2 + y**2)
theta = np.arctan(y/x)

# Components of H
r_comp = 2*np.cos(theta)/r**3
theta_comp = np.sin(theta)/r**3

# Plot
fig, ax = plt.subplots(figsize=(10, 9))
ax.quiver(x, y, r_comp, theta_comp,
          [pd.qcut(r_comp.flatten(), q=colors_in_quiver, labels=False)],
          headwidth=headwidth,
          minlength=minlength,
          pivot=pivot,
          cmap='hsv')
EDIT
Assumption: since there is no z component, I assume it is free to move. Hence, in the code below, both z and phi move freely.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Parameters
pts = 20
x_range = 6
y_range = 8
z_range = 4

def comps(levels):
    # Initialization
    x, y, z = (np.linspace(-x_range, x_range, pts),
               np.linspace(-y_range, y_range, pts),
               np.linspace(-z_range, z_range, 5)[levels])
    x, y, z = np.meshgrid(x, y, z)
    r = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arctan((y + 1e-10)/(x + 1e-10))
    phi = np.arccos((z + 1e-10)/(r + 1e-10))
    # Components of H
    r_comp = 2*np.cos(theta)/r**3
    theta_comp = np.sin(theta)/r**3
    phi_comp = phi
    return x, y, z, r_comp, theta_comp, phi_comp

# Plot
ax = plt.figure(figsize=(10, 10)).add_subplot(projection='3d')
for i in range(4):
    x, y, z, r_comp, theta_comp, phi_comp = comps(i)
    ax.quiver(x, y, z, r_comp, theta_comp, phi_comp,
              color=['red', 'green', 'blue', 'yellow'][i], cmap='hsv')
As you can see above, it looks like there are layers stacked in a pile. This shape is determined by the third component. Since it is missing in this case, I have sampled it in a layered style. You will get differently shaped 3D plots depending on the third component (which you are free to change).
NOTE: Consider the case of the gravitational field: it has only one component, G(x, y, z) = -g*e_z. So, if you plot a 3D graph without any assumption, you will get a cuboid with all parallel vectors pointing downwards. However, since we know that the Earth is a sphere, we are no longer free to choose x, y, z, and so if we include the information about the manifold (not in physics/math language but in plain language) in the code to plot G(x, y, z), we would get a sphere with vectors on its surface pointing towards its center.
Since you posted this on Physics.SE, I assume your equation is using the physics convention for spherical coordinates, with theta=0 at the north pole.
You can easily plot 3D vector fields using SageMath, which is a huge mathematics / algebra system built on top of Python. If you can write Python, it's easy to start working with SageMath. You don't even need to install it: you can run Sage scripts in your Web browser, on the SageMathCell server.
Sage has sophisticated support for vector fields, with a variety of coordinate systems built-in. It can do Cartesian plots working in spherical coordinates, but it's allegedly rather slow, and I haven't experimented with it.
The code below creates an interactive 3D plot of your vector field, doing the coordinate transformations explicitly. Sage lets us define symbolic variables, which can be used to create equations and do algebra and calculus. Sage syntax is almost identical to plain Python, although as a mathematical convenience it permits the use of ^ as well as ** for exponentiation (Sage uses ^^ for exclusive-or).
The Sage plot_vector_field3d function works with Cartesian coordinates. It expects its function argument to be a list, tuple, or vector of functions that each take x, y, z args. So we need to transform those args to polar form, call the H function with the polar args, then transform the polar value returned by H to Cartesian. By working with symbolic variables and functions, we can transform the H function itself, as this code illustrates.
# Create some symbolic variables
x, y, z = var('x,y,z')
rad, th, ph = var('rad,th,ph')

# Transform polar coords to Cartesian
def sphere_xyz(R, theta, phi):
    r = R * sin(theta)
    return r * cos(phi), r * sin(phi), R * cos(theta)

# Transform Cartesian coords to polar
def xyz_sphere(X, Y, Z):
    r2 = X^2 + Y^2
    r = sqrt(r2)
    R = sqrt(r2 + Z^2)
    return R, atan2(r, Z), atan2(Y, X)

# The equation in polar coords
H_polar(rad, th, ph) = (2 * cos(th) / rad^3, sin(th) / rad^3, 0)

# Transform equation to Cartesian
R, theta, phi = xyz_sphere(x, y, z)
H_xyz(x, y, z) = sphere_xyz(*H_polar(rad=R, th=theta, ph=phi))
print(H_xyz)

lo, hi = 3, 6
P = plot_vector_field3d(H_xyz,
                        (x, lo, hi), (y, lo, hi), (z, lo, hi),
                        colors='rainbow', plot_points=[6, 6, 6])

perspective = False
projection = "perspective" if perspective else "orthographic"
P.show(frame=True, projection=projection)
The script prints this for the transformed version of the H equation.
(x, y, z) |--> (2*z*sin(sqrt(x^2 + y^2)/(x^2 + y^2 + z^2)^2)/(x^2 + y^2 + z^2)^2, 0, 2*z*cos(sqrt(x^2 + y^2)/(x^2 + y^2 + z^2)^2)/(x^2 + y^2 + z^2)^2)
Here's a screenshot of the plot, using an orthographic projection.
It's a lot easier to see the structure in the interactive 3D view.
You can drag the view with the mouse and zoom with the scroll wheel; the Shift and Ctrl keys can also be used. On touchscreens, use one finger to rotate and two fingers to pan and zoom.

Mapping the color scale of a 3D isosurface on a scalar field

Let's say we have some 3D complex-valued function f(x,y,z). Using Plotly, I'm trying to plot isosurfaces of the magnitude |f(x,y,z)| of such a function. So far everything is OK and my code seems to work well; please find below a working example on atomic orbital functions:
import chart_studio.plotly as py
import plotly.graph_objs as go
import scipy.special as scispe
import numpy as np
import math

a = 5.29e-11  # Bohr radius (m)

def orbital(n, l, m, r, theta, phi):  # Complex function I want to plot
    L = scispe.genlaguerre(n-l-1, 2*l+1)  # Laguerre polynomial
    radial = (2/(n*a))**(3/2) * np.sqrt(math.factorial(n-l-1)/(2*n*math.factorial(n+l))) * np.exp(-2*r/n) * (2*r/n)**l * L(2*r/n)
    wavefunction = radial * scispe.sph_harm(m, l, phi, theta)
    return wavefunction

# Quantum numbers
n = 2
l = 1
m = 0

goodspan = (3 * n**2 - l * (l+1))/2  # Plot span adapted to the mean electron position
x, y, z = np.mgrid[-goodspan:goodspan:40j, -goodspan:goodspan:40j, -goodspan:goodspan:40j]  # in units of a
r = np.sqrt(x**2 + y**2 + z**2)  # Function has to be evaluated in spherical coordinates
theta = np.arccos(z/r)
phi = np.arctan(y/x)

AO = orbital(n, l, m, r, theta, phi)
magnitude = abs(AO)  # Compute the magnitude of the function
phase = np.angle(AO)  # Compute the phase of the function
isoprob = np.amax(magnitude)/2  # Set the value of the isosurface

fig = go.Figure(data=go.Isosurface(
    x=x.flatten(),
    y=y.flatten(),
    z=z.flatten(),
    value=magnitude.flatten(),
    opacity=0.5,
    isomin=isoprob,
    isomax=isoprob,
    surface_count=1,
    caps=dict(x_show=True, y_show=True)
))
fig.show()
which gives me this:
At this point, the color scale of the graph is attributed depending on the value of the magnitude |f(x,y,z)|, so that a single isosurface is always uniform in color.
Now, instead of having the color scale mapped on the magnitude |f(x,y,z)|, I would like it to be mapped on the value of the phase Ф(x,y,z) = arg(f(x,y,z)), so that the color of each point of a plotted isosurface tells us about the value of the field Ф(x,y,z) (which would ideally be distributed on [-π,π]) instead of |f(x,y,z)| at that point.
Basically, I would like to do this with Plotly instead of Mayavi, if it's possible.
It seems to me that all of this has something to do with a special way to set the cmin and cmax parameters of the Isosurface function, but I can't figure out how to do this.
As @gnodab mentioned in his comment, plotly isosurfaces do not really support colouring the surfaces by a fifth dimension (at least there is no obvious way to do it). I am also not sure if it might be possible to extract the data describing the isosurface somehow to be re-plotted as a regular surface.
In this post, however, they describe how to generate an isosurface with skimage.measure.marching_cubes_lewiner which is then plotted and coloured by a custom colorscale with plotly as 'mesh3d' trace. This might be what you want. If I find the time, I'll give that a try and edit my answer later.
Given @Jan Joswig's answer and the link they provided, the quick/compact way of doing it would be:
import plotly.graph_objects as go
from skimage import measure
import numpy as np

xyz_shape = vol.shape
verts, faces = measure.marching_cubes(vol, .5)[:2]  # iso-surface at .5 level
x, y, z = verts.T
I, J, K = faces.T

fig = go.Figure(
    data=[go.Mesh3d(
        x=x,
        y=y,
        z=z,
        color='lightpink',
        opacity=0.50,
        i=I,
        j=J,
        k=K)])
fig.show()
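
To get closer to what the question asks (colouring by the phase rather than a single colour), one possibility is to interpolate the phase field at the mesh vertices and pass it as the intensity of the Mesh3d trace. This is only a sketch, assuming magnitude and phase are the 3D arrays from the question's code; it is not a tested solution:

import numpy as np
import plotly.graph_objects as go
from skimage import measure
from scipy.ndimage import map_coordinates

# magnitude and phase are 3D arrays on the same grid (see the question's code)
isoprob = np.amax(magnitude) / 2
verts, faces = measure.marching_cubes(magnitude, isoprob)[:2]

# Sample the phase at the (fractional) vertex positions, in array-index coordinates
phase_at_verts = map_coordinates(phase, verts.T, order=1)

x, y, z = verts.T
I, J, K = faces.T
fig = go.Figure(data=[go.Mesh3d(
    x=x, y=y, z=z, i=I, j=J, k=K,
    intensity=phase_at_verts,   # colour by phase instead of magnitude
    cmin=-np.pi, cmax=np.pi,
    opacity=0.5,
)])
fig.show()

Note that marching_cubes returns vertices in array-index coordinates, so you may want to rescale x, y, z back to physical units before plotting. A cyclic colorscale (for example 'HSV' or 'Twilight', if your Plotly version provides them) suits phase values that wrap at ±π.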

Rearrange data in two-dimensional array according to transformation from polar to Cartesian coordinates

I have a two-dimensional array that represents function values at positions in a polar coordinate system. For example:
import numpy as np
radius = np.linspace(0, 1, 50)
angle = np.linspace(0, 2*np.pi, radius.size)
r_grid, a_grid = np.meshgrid(radius, angle)
data = np.sqrt((r_grid/radius.max())**2
               + (a_grid/angle.max())**2)
Here the data is arranged in a rectangular grid corresponding to the polar coordinates. I want to rearrange the data in the array such that the axes represent the corresponding Cartesian coordinate system. The old versus new layout can be visualized as follows:
import matplotlib.pyplot as plt
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=plt.figaspect(0.5))
ax1.set(title='Polar coordinates', xlabel='Radius', ylabel='Angle')
ax1.pcolormesh(r_grid, a_grid, data)
ax2.set(title='Cartesian coordinates', xlabel='X', ylabel='Y')
x_grid = r_grid * np.cos(a_grid)
y_grid = r_grid * np.sin(a_grid)
ax2.pcolormesh(x_grid, y_grid, data)
Here the coordinates are explicitly given and the plot is adjusted accordingly. I want the data to be rearranged in the data array itself instead. It should contain all values, optionally filling with zeros to fit the shape (similar to scipy.ndimage.rotate(..., reshape=True)).
If I manually loop over the polar arrays to compute the Cartesian coordinates, the result contains empty regions which ideally should be filled as well:
new = np.zeros_like(data)
visits = np.zeros_like(new)
for r, a, d in np.nditer((r_grid, a_grid, data)):
    i = 0.5 * (1 + r * np.sin(a)) * new.shape[0]
    j = 0.5 * (1 + r * np.cos(a)) * new.shape[1]
    i = min(int(i), new.shape[0] - 1)
    j = min(int(j), new.shape[1] - 1)
    new[i, j] += d
    visits[i, j] += 1
new /= np.maximum(visits, 1)
ax2.imshow(new, origin='lower')
Is there a way to achieve the transformation while avoiding empty regions in the resulting data array?
tl;dr: No, not without changing some conditions of your problem.
The artefact you are seeing is a property of the transformation. It is due to the fixed resolution in angle for all radii, and hence not due to a wrong or bad implementation of the transformation. The Cartesian grid simply implies a higher spatial resolution in these areas than is resolved by the polar map.
The only "clean" way (that I can think of right now) to handle this is to have an adjustable resolution in the polar coordinates to account for the 1/r scaling (if your input data allows it).
A somewhat cheating way of visualizing this without the gaps would be to randomly distribute the points over the gaps. The argument here is that you do not have the resolution to decide in which bin they belonged to begin with, so you could just throw each one randomly into a bin that might have been a possible origin instead of throwing them all into the same one (as you are doing right now). However, I would strongly discourage this; it just gives you a prettier plot.
Note that this is somewhat equivalent to the behaviour of the upper right plot in your question.
This doesn't really give the expected result, but maybe it will help you in achieving a solution after some needed corrections...
import numpy as np
import matplotlib.pyplot as plt

radius = np.linspace(0, 1, 50)
angle = np.linspace(0, 2*np.pi, radius.size)
r_grid, a_grid = np.meshgrid(radius, angle)
data = np.sqrt((r_grid/radius.max())**2
               + (a_grid/angle.max())**2)

def polar_to_cartesian(data):
    new = np.zeros_like(data) * np.nan
    x = np.linspace(-1, 1, new.shape[1])
    y = np.linspace(-1, 1, new.shape[0])
    for i in range(new.shape[0]):
        for j in range(new.shape[1]):
            x0, y0 = x[j], y[i]
            r, a = np.sqrt(x0**2 + y0**2), np.arctan2(y0, x0)
            data_i = np.argmin(np.abs(a_grid[:, 0] - a))
            data_j = np.argmin(np.abs(r_grid[0, :] - r))
            val = data[data_i, data_j]
            if r <= 1:
                new[i, j] = val
    return new

new = polar_to_cartesian(data)
fig, ax = plt.subplots()
ax.imshow(new, origin='lower')
EDIT:
Modified to use np.arctan2 according to the suggestion of the OP.
You could loop over the Cartesian array, transforming each grid point to polar coordinates and approximating the function value by interpolation from your polar grid data. You may still want to leave the corner regions blank though, for lack of close enough data.
I don't think there is a better way, unless of course you have access to the original function.
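
A sketch of that interpolation idea, using scipy.interpolate.RegularGridInterpolator on the (angle, radius) grid from the question; the variable names are mine, and points outside the sampled radius are left as NaN:

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# data, radius, angle as defined in the question
interp = RegularGridInterpolator((angle, radius), data,
                                 bounds_error=False, fill_value=np.nan)

# Cartesian target grid
x = np.linspace(-1, 1, data.shape[1])
y = np.linspace(-1, 1, data.shape[0])
x_grid, y_grid = np.meshgrid(x, y)

# Transform each Cartesian point to polar coordinates and interpolate
r = np.sqrt(x_grid**2 + y_grid**2)
a = np.mod(np.arctan2(y_grid, x_grid), 2*np.pi)   # map to [0, 2*pi) to match `angle`
new = interp(np.stack([a, r], axis=-1))

Points with r greater than radius.max() fall outside the polar grid and stay NaN, which matches the blank corners mentioned above.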

How to generate a random sample of points from a 3-D ellipsoid using Python?

I am trying to sample around 1000 points from a 3-D ellipsoid, uniformly. Is there some way to code it such that we can get points starting from the equation of the ellipsoid?
I want points on the surface of the ellipsoid.
Theory
Using this excellent answer to the MSE question How to generate points uniformly distributed on the surface of an ellipsoid?, we can generate a point uniformly on the sphere, apply the mapping f : (x,y,z) -> (x'=ax, y'=by, z'=cz), and then correct the distortion created by the map by discarding the point randomly with some probability p(x,y,z).
Assuming that the 3 axes of the ellipsoid are named such that
0 < a < b < c
we discard a generated point with probability
p(x,y,z) = 1 - mu(x,y,z)/mu_max
i.e. we keep it with probability mu(x,y,z)/mu_max, where
mu(x,y,z) = ((acy)^2 + (abz)^2 + (bcx)^2)^0.5
and
mu_max = bc
Implementation
import numpy as np

np.random.seed(42)

# Function to generate a random point on a uniform sphere
# (relying on https://stackoverflow.com/a/33977530/8565438)
def randompoint(ndim=3):
    vec = np.random.randn(ndim, 1)
    vec /= np.linalg.norm(vec, axis=0)
    return vec

# Give the length of each axis (example values):
a, b, c = 1, 2, 4

# Function to scale up generated points using the function `f` mentioned above:
f = lambda x, y, z: np.multiply(np.array([a, b, c]), np.array([x, y, z]))

# Keep the point with probability `mu(x,y,z)/mu_max`, ie
def keep(x, y, z, a=a, b=b, c=c):
    mu_xyz = ((a * c * y) ** 2 + (a * b * z) ** 2 + (b * c * x) ** 2) ** 0.5
    return mu_xyz / (b * c) > np.random.uniform(low=0.0, high=1.0)

# Generate points until we have, let's say, 1000 points:
n = 1000
points = []
while len(points) < n:
    [x], [y], [z] = randompoint()
    if keep(x, y, z):
        points.append(f(x, y, z))
Checks
Check if all points generated satisfy the ellipsoid condition (ie that x^2/a^2 + y^2/b^2 + z^2/c^2 = 1):
for p in points:
    pscaled = np.multiply(p, np.array([1/a, 1/b, 1/c]))
    assert np.allclose(np.sum(np.dot(pscaled, pscaled)), 1)
Runs without raising any errors. Visualize the points:
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
points = np.array(points)
ax.scatter(points[:, 0], points[:, 1], points[:, 2])
# set aspect ratio for the axes using https://stackoverflow.com/a/64453375/8565438
ax.set_box_aspect((np.ptp(points[:, 0]), np.ptp(points[:, 1]), np.ptp(points[:, 2])))
plt.show()
These points seem evenly distributed.
Problem with the currently accepted answer
Generating a point on a sphere and then just reprojecting it onto the ellipsoid without any further corrections will result in a distorted distribution. This is essentially the same as setting this post's p(x,y,z) to 0. Imagine an ellipsoid where one axis is orders of magnitude bigger than another; this way it is easy to see that naive reprojection is not going to work.
Consider using Monte-Carlo simulation: generate a random 3D point; check if the point is inside the ellipsoid; if it is, keep it. Repeat until you get 1,000 points.
P.S. Since the OP changed their question, this answer is no longer valid.
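
For reference, a minimal sketch of that rejection loop (it samples the interior of the ellipsoid, not the surface, which is why the P.S. applies; the semi-axis values are placeholders):

import numpy as np

rng = np.random.default_rng(0)
a, b, c = 1, 2, 4                          # semi-axes (example values)
points = []
while len(points) < 1000:
    p = rng.uniform(-1, 1, size=3) * np.array([a, b, c])  # random point in the bounding box
    if (p[0]/a)**2 + (p[1]/b)**2 + (p[2]/c)**2 <= 1:      # keep it only if it is inside
        points.append(p)
points = np.array(points)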
J.F. Williamson, "Random selection of points distributed on curved surfaces", Physics in Medicine & Biology 32(10), 1987, describes a general method of choosing a uniformly random point on a parametric surface. It is an acceptance/rejection method that accepts or rejects each candidate point depending on its stretch factor (norm-of-gradient). To use this method for a parametric surface, several things have to be known about the surface, namely—
x(u, v), y(u, v) and z(u, v), which are functions that generate 3-dimensional coordinates from two dimensional coordinates u and v,
The ranges of u and v,
g(point), the norm of the gradient ("stretch factor") at each point on the surface, and
gmax, the maximum value of g for the entire surface.
The algorithm is then:
Generate a point on the surface, xyz.
If g(xyz) >= RNDU01()*gmax, where RNDU01() is a uniform random variate in [0, 1), accept the point. Otherwise, repeat this process.
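
A generic skeleton of that loop might look like this; the argument names are placeholders for the surface-specific pieces listed above:

import numpy as np

def sample_on_surface(xyz, g, u_range, v_range, gmax, rng=np.random.default_rng()):
    # Acceptance/rejection: keep a candidate (u, v) with probability g(u, v) / gmax.
    while True:
        u = rng.uniform(*u_range)
        v = rng.uniform(*v_range)
        if g(u, v) >= rng.uniform() * gmax:
            return xyz(u, v)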
Chen and Glotzer (2007) apply the method to the surface of a prolate spheroid (one form of ellipsoid) in "Simulation studies of a phenomenological model for elongated virus capsid formation", Physical Review E 75(5), 051504 (preprint).
Here is a generic function to pick a random point on the surface of a sphere, spheroid or any triaxial ellipsoid with a, b and c parameters. Note that generating the angles directly will not provide a uniform distribution and will cause an excessive population of points along the z direction. Instead, phi is obtained as the inverse of a randomly generated cos(phi).
import numpy as np

def random_point_ellipsoid(a, b, c):
    u = np.random.rand()
    v = np.random.rand()
    theta = u * 2.0 * np.pi
    phi = np.arccos(2.0 * v - 1.0)
    sinTheta = np.sin(theta)
    cosTheta = np.cos(theta)
    sinPhi = np.sin(phi)
    cosPhi = np.cos(phi)
    rx = a * sinPhi * cosTheta
    ry = b * sinPhi * sinTheta
    rz = c * cosPhi
    return rx, ry, rz
This function is adapted from this post: https://karthikkaranth.me/blog/generating-random-points-in-a-sphere/
One way of doing this which generalises to any shape or surface is to convert the surface to a voxel representation at an arbitrarily high resolution (the higher the resolution the better, but also the slower). Then you can easily select the voxels randomly however you want, and then you can select a point on the surface within the voxel using the parametric equation. The voxel selection should be completely unbiased, and the selection of the point within the voxel will suffer the same biases that come from using the parametric equation, but if there are enough voxels then the size of these biases will be very small.
You need high-quality cube-intersection code, but with something like an ellipsoid it can be optimised quite easily. I'd suggest stepping through the bounding box subdivided into voxels. A quick distance check will eliminate most cubes, and you can do a proper intersection check for the ones where an intersection is possible. For the point within the cube I'd be tempted to do something simple like a random XYZ offset from the centre, and then cast a ray from the centre of the ellipsoid; the selected point is where the ray intersects the surface. As I said above, it will be biased, but with small voxels the bias will probably be small enough.
There are libraries that do convex shape intersection very efficiently, and cube/ellipsoid will be one of the options. They will be highly optimised, but I think the distance culling would probably be worth doing by hand whatever. And you will need a library that differentiates between a surface intersection and one object being totally inside the other.
And if you know your ellipsoid is aligned to an axis, then you can do the voxel/edge intersection very easily, as a stack of 2D square-intersects-ellipse problems, with the set of squares to be tested defined as those that are adjacent to those in the layer above. That might be quicker.
One of the things that makes this approach more manageable is that you do not need to write all the code for edge cases (it is a lot of work to get around issues with floating-point inaccuracies that can lead to missing or doubled voxels at the intersection). That's because these will be very rare, so they won't affect your sampling.
It might even be quicker to simply find all the voxels inside the ellipsoid and then throw away all the voxels with 6 neighbours... Lots of options. It all depends how important performance is. This will be much slower than the other suggestions, but if you want ~1000 points then ~100,000 voxels feels about the minimum for the surface, so you probably need ~1,000,000 voxels in your bounding box. However, even testing 1,000,000 intersections is pretty fast on modern computers.
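
A rough sketch of this idea for an axis-aligned ellipsoid, using a coarse surface-voxel test plus the ray-casting projection described above; the resolution, the tolerance used for the voxel test and all names are my own choices:

import numpy as np

rng = np.random.default_rng(0)
a, b, c = 1, 2, 4          # semi-axes (example values)
res = 100                  # voxels per axis; higher means less bias but more work

# Voxel centres covering the bounding box of the ellipsoid
xs = np.linspace(-a, a, res)
ys = np.linspace(-b, b, res)
zs = np.linspace(-c, c, res)
X, Y, Z = np.meshgrid(xs, ys, zs, indexing='ij')
F = (X/a)**2 + (Y/b)**2 + (Z/c)**2 - 1.0

# Crude surface test: keep voxels whose centre lies within about half a voxel
# diagonal of the implicit surface F = 0 (|F| / |grad F| approximates the distance).
h = np.array([xs[1] - xs[0], ys[1] - ys[0], zs[1] - zs[0]])
grad = 2.0 * np.sqrt((X/a**2)**2 + (Y/b**2)**2 + (Z/c**2)**2)
surface = np.abs(F) <= grad * np.linalg.norm(h) / 2
centres = np.column_stack([X[surface], Y[surface], Z[surface]])

# Pick random surface voxels, jitter inside each voxel, then project the point
# onto the surface by casting a ray from the centre of the ellipsoid.
idx = rng.integers(len(centres), size=1000)
pts = centres[idx] + rng.uniform(-0.5, 0.5, size=(1000, 3)) * h
scale = 1.0 / np.sqrt((pts[:, 0]/a)**2 + (pts[:, 1]/b)**2 + (pts[:, 2]/c)**2)
points = pts * scale[:, None]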
Depending on what "uniformly" refers to, different methods are applicable. In any case, we can use the parametric equations in spherical coordinates (from Wikipedia),
x = a*s*sin(theta)*cos(phi), y = b*s*sin(theta)*sin(phi), z = c*s*cos(theta),
where s = 1 refers to the ellipsoid given by the semi-axes a > b > c. From these equations we can derive the relevant volume/area element and generate points such that their probability of being generated is proportional to that volume/area element. This will provide constant volume/area density across the surface of the ellipsoid.
1. Constant volume density
This method generates points on the surface of an ellipsoid such that their volume density across the surface of the ellipsoid is constant. A consequence of this is that the one-dimensional projections (i.e. the x, y, z coordinates) are uniformly distributed; for details see the plot below.
The volume element for a triaxial ellipsoid is given by dV = a*b*c * s^2 * sin(theta) ds dtheta dphi (see here), and is thus proportional to sin(theta) (for 0 <= theta <= pi). We can use this as the basis for a probability distribution that indicates "how many" points should be generated for a given value of theta: where the density is low/high, the probability of generating a corresponding value of theta should be low/high, too.
Hence, we can use the function f(theta) = sin(theta)/2 as our probability distribution on the interval [0, pi]. The corresponding cumulative distribution function is F(theta) = (1 - cos(theta))/2. Now we can use Inverse transform sampling to generate values of theta according to f(theta) from a uniform random distribution. The values of phi can be obtained directly from a uniform distribution on [0, 2*pi].
Example code:
import matplotlib.pyplot as plt
import numpy as np
from numpy import sin, cos, pi
rng = np.random.default_rng(seed=0)
a, b, c = 10, 3, 1
N = 5000
phi = rng.uniform(0, 2*pi, size=N)
theta = np.arccos(1 - 2*rng.random(size=N))
x = a*sin(theta)*cos(phi)
y = b*sin(theta)*sin(phi)
z = c*cos(theta)
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(x, y, z, s=2)
plt.show()
which produces the following plot:
The following plot shows the one-dimensional projections (i.e. density plots of x, y, z):
import seaborn as sns
sns.kdeplot(data=dict(x=x, y=y, z=z))
plt.show()
2. Constant area density
This method generates points on the surface of an ellipsoid such that their area density is constant across the surface of the ellipsoid.
Again, we start by calculating the corresponding area element. For simplicity we can use SymPy:
from sympy import cos, sin, symbols, Matrix

a, b, c, t, p = symbols('a b c t p')
x = a*sin(t)*cos(p)
y = b*sin(t)*sin(p)
z = c*cos(t)

J = Matrix([
    [x.diff(t), x.diff(p)],
    [y.diff(t), y.diff(p)],
    [z.diff(t), z.diff(p)],
])
print((J.T @ J).det().simplify())
This yields
-a**2*b**2*sin(t)**4 + a**2*b**2*sin(t)**2 + a**2*c**2*sin(p)**2*sin(t)**4 - b**2*c**2*sin(p)**2*sin(t)**4 + b**2*c**2*sin(t)**4
and further simplifies to (dividing by (a*b)**2 and taking the sqrt):
sin(t)*np.sqrt(1 + ((c/b)**2*sin(p)**2 + (c/a)**2*cos(p)**2 - 1)*sin(t)**2)
Since for this case the area element is more complex, we can use rejection sampling:
import matplotlib.pyplot as plt
import numpy as np
from numpy import cos, sin

def f_redo(t, p):
    return (
        sin(t)*np.sqrt(1 + ((c/b)**2*sin(p)**2 + (c/a)**2*cos(p)**2 - 1)*sin(t)**2)
        < rng.random(size=t.size)
    )

rng = np.random.default_rng(seed=0)
N = 5000
a, b, c = 10, 3, 1

t = rng.uniform(0, np.pi, size=N)
p = rng.uniform(0, 2*np.pi, size=N)
redo = f_redo(t, p)
while redo.any():
    t[redo] = rng.uniform(0, np.pi, size=redo.sum())
    p[redo] = rng.uniform(0, 2*np.pi, size=redo.sum())
    redo[redo] = f_redo(t[redo], p[redo])

x = a*np.sin(t)*np.cos(p)
y = b*np.sin(t)*np.sin(p)
z = c*np.cos(t)

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(x, y, z, s=2)
plt.show()
which yields the following distribution:
The following plot shows the corresponding one-dimensional projections (x, y, z):

Pixels and geometrical shapes - Python/PIL

I'm trying to build a basic heatmap based on points. Each point has a heat radius, and therefore is represented by a circle.
The problem is that the circle needs to be converted into a list of pixels colored based on their distance from the circle's center.
I'm finding it hard to come up with an optimal solution for many points; what I have for now is something similar to this:
for pixel in pixels:
    if (pixel.x - circle.x)**2 + (pixel.y - circle.y)**2 <= circle.radius**2:
        pixel.set_color(circle.color)
Edit:
data I have:
pixel at the center of the circle
circle radius (integer)
Any tips?
Instead of doing it pixel-by-pixel, use a higher level interface with anti-aliasing, like the aggdraw module and its ellipse(xy, pen, brush) function.
Loop over the number of color steps you want (let's say, radius/2) and use 255/number_of_steps*current_step as the alpha value for the fill color.
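
As an illustration of that loop, here is a sketch using PIL's ImageDraw rather than aggdraw, so without anti-aliasing; the image size, radius and colours are arbitrary example values:

from PIL import Image, ImageDraw

size = 201
radius = 80
cx, cy = size // 2, size // 2

img = Image.new('RGB', (size, size), 'white')
draw = ImageDraw.Draw(img, 'RGBA')   # RGBA draw mode so the fills are alpha-blended

# Draw concentric discs from the outside in, increasing the alpha towards the centre.
number_of_steps = radius // 2
for step in range(number_of_steps):
    r = radius * (number_of_steps - step) / number_of_steps
    alpha = int(255 / number_of_steps * (step + 1))
    draw.ellipse([cx - r, cy - r, cx + r, cy + r], fill=(255, 0, 0, alpha))

img.save('heat_point.png')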
For plotting it is usually recommended to use the matplotlib library (e.g. using imshow for heatmaps). Of course matplotlib also supports color gradients.
However, I don't really understand what you are trying to accomplish. If you just want to draw a bunch of colored circles then pretty much any graphics library will do (e.g. using the ellipse function in PIL).
It sounds like you want to color the pixels according to their distance from the center, but your own example code suggests that the color is constant?
If you are handling your pixels by yourself and your point is to increase performance, you can just focus on the square [x - radius; x + radius] * [y - radius; y + radius], since the points of your circle live there. That will save you a lot of useless iterations, if of course you CAN focus on this region (i.e. your pixels are not just an array without an index per line and column).
You can even be sure that the pixels in the square [x - radius*sqrt(2)/2; x + radius*sqrt(2)/2] * [y - radius*sqrt(2)/2; y + radius*sqrt(2)/2] must be colored, by basic trigonometry (the largest square inside the circle).
So you could do:
import math

half_sqrt = math.sqrt(2) / 2
x_max = x + radius * half_sqrt
y_max = y + radius * half_sqrt

def colorize_4_parts(i, j):
    # Hoping you have access to such a get_pixel function!
    pixel_top_right = get_pixel(i, j)
    pixel_top_right.set_color(circle.color)
    pixel_top_left = get_pixel(2 * x - i, j)
    pixel_top_left.set_color(circle.color)
    pixel_bot_right = get_pixel(i, 2 * y - j)
    pixel_bot_right.set_color(circle.color)
    pixel_bot_left = get_pixel(2 * x - i, 2 * y - j)
    pixel_bot_left.set_color(circle.color)

for i in range(x, x + radius + 1):
    for j in range(y, y + radius + 1):
        if i <= x_max and j <= y_max:
            # Inside the inscribed square: guaranteed to be in the circle
            colorize_4_parts(i, j)
        else:
            pixel = get_pixel(i, j)
            if (pixel.x - circle.x)**2 + (pixel.y - circle.y)**2 <= circle.radius**2:
                # Apply same colors as above, could be a function
                colorize_4_parts(i, j)
This is optimized to reduce costly computations to the minimum.
EDIT: function updated to be more efficient again: I had forgotten that we have a double symmetry, horizontal and vertical, so we can compute only the top-right corner!
This is a very common operation, and here's how people do it...
summary: Represent the point density on a grid, smooth this using a 2D convolution if needed (this turns your points into circles), and plot this as a heatmap using matplotlib.
In more detail: First, make a 2D grid for your heatmap, and add your data points to the grid, incrementing a cell by 1 when a data point lands in it. Second, make another grid that represents the shape you want to give each point (usually people use a cylinder or a Gaussian or something like this). Third, convolve these two together, using, say, scipy.signal.convolve2d. Finally, use matplotlib's imshow function to plot the convolution, and this will be your heatmap.
If you can't use the tools suggested here, you might find work-arounds, but the standard approach has its advantages. For example, the convolution will deal well with cases where the circles overlap.
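
A compact sketch of that approach; the point coordinates, grid size and kernel radius are made-up example values:

import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import convolve2d

# Some example points and a grid covering them
points = np.random.rand(200, 2) * 100
grid, _, _ = np.histogram2d(points[:, 1], points[:, 0],
                            bins=100, range=[[0, 100], [0, 100]])

# Kernel: a disc whose intensity falls off linearly from the centre (heat radius = 10)
r = 10
yy, xx = np.mgrid[-r:r+1, -r:r+1]
dist = np.sqrt(xx**2 + yy**2)
kernel = np.clip(1 - dist / r, 0, None)

# Convolve the point density with the kernel and show it as a heatmap
heat = convolve2d(grid, kernel, mode='same')
plt.imshow(heat, origin='lower', cmap='hot')
plt.show()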
