Creating a structured non-rectangular mesh in Python

I am trying to model the temperature distribution in a rocket nozzle by solving the heat equation with the finite difference method. To apply this method I need my mesh to fit the geometry of the modelled rocket engine, but I am having trouble because the meshgrid function seems to only produce rectangular meshes.
I tried the code below, and want the mesh to only go up to the red line, but cannot find a way of doing this.
import numpy as np
import matplotlib.pyplot as pl
from mpl_toolkits import mplot3d
# Grid spacing size
Dx = 0.1
L = 10
L_T = 2
D_T = 1
D_C = 2
D_E = 5
maxtime = 10
# forming the x grid spacing
x = np.arange(0,L,Dx)
y = np.zeros(len(x))
# factor to scale loop so have integers
m = 10
# gradient of the engine line
M = (D_C-D_T)/(L_T)
# initial height of engine is at combustion point.
y[0] = D_C
for i in range(1,int(L_T*m),int(Dx*m)):
    y[i] = y[i-1] - M*Dx
    #Y[i] = np.arange(0,y[i])
for i in range(int(L_T*m),int(L*m),int(Dx*m)):
    y[i] = y[i-1] + M*Dx
# set meshgrids
Xg , Yg = np.meshgrid(x,y)
pl.scatter(Xg,Yg)
pl.plot(x,y,c='red')
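One way to do this (a sketch, not from the original post) is to keep the rectangular meshgrid and mask away the points above the engine contour. The piecewise profile below is reconstructed from the constants in the question (converging at slope M up to the throat at L_T, then diverging):
import numpy as np
import matplotlib.pyplot as pl
Dx = 0.1
L, L_T, D_T, D_C = 10, 2, 1, 2
M = (D_C - D_T)/L_T
x = np.arange(0, L, Dx)
# piecewise-linear contour: converging section, throat at x = L_T, diverging section
profile = np.where(x <= L_T, D_C - M*x, D_T + M*(x - L_T))
Xg, Yg = np.meshgrid(x, np.arange(0, profile.max() + Dx, Dx))
mask = Yg <= profile  # broadcasts the contour across every row of the grid
pl.scatter(Xg[mask], Yg[mask], s=2)
pl.plot(x, profile, c='red')
pl.show()
For a finite difference solver it is usually easier to keep the full rectangular arrays and use mask to skip points outside the nozzle, so the stencil indexing stays regular.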

Related

Animating Brownian particle movement with Python?

I am attempting to create a Langevin simulation using python. Currently, I have code which updates the x and y coordinates of a single point based on the Langevin equations, and returns all of these positions in two arrays (x array and y array). I want to plot these changes in position as a moving scatter plot to show how the particle moves over time. How can I do this? Below is what I have so far:
# IMPORT STATEMENTS
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
# CONSTANTS
v = 3.12e-5 # swimming speed of B. Subtilis [m/s]
M = 1 # moment of B. Subtilis
k = 1.38e-23 # Boltzmann constant [m^2kg/s^2K]
T = 293 # Room temperature [K]
eta = 0.1 # viscosity of water [Pa s]
a = 2e-6 # spherical cell radius [m]
Dr = k*T/(8*np.pi*eta*a**3) # rotational diffusion coefficient of B. Subtilis
# ADJUSTABLE PARAMETERS
B = 1 # strength of the magnetic field [T]
t = 100 # time over which motion is observed [s]
dt = 1 # time step between recorded positions
N = 1000 # number of cells
#INITIAL CONDITIONS
theta_i = 0 # initial swimming orientation [radians]
xi = 0.001 # initial x position [m]
yi = 0.001 # initial y position [m]
x = []
y = []
# MAIN SCRIPT
for i in range(0, t, dt):
    ksi = np.random.normal()  # unit Gaussian noise for the rotational diffusion
    theta_j = theta_i + (M*B*np.sin(theta_i) + np.sqrt(2*Dr)*ksi)*dt
    xj = xi + v*np.cos(theta_i)*dt
    yj = yi + v*np.sin(theta_i)*dt
    x.append(xj)
    y.append(yj)
    theta_i = theta_j
    xi = xj
    yi = yj
Take a look at this: https://github.com/mohammadjafariph/Brownian-Motion-Simulation
Animated versions are available there.
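For the moving-scatter part of the question, here is a minimal sketch using matplotlib's FuncAnimation. The random-walk data is only a placeholder so the example is self-contained; substitute the x and y lists produced by the Langevin loop above:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
x = np.cumsum(np.random.randn(100))  # placeholder trajectory
y = np.cumsum(np.random.randn(100))
fig, ax = plt.subplots()
scat = ax.scatter([], [])
ax.set_xlim(x.min() - 1, x.max() + 1)
ax.set_ylim(y.min() - 1, y.max() + 1)
def update(frame):
    # show every recorded position up to the current frame as a trail
    scat.set_offsets(np.column_stack([x[:frame + 1], y[:frame + 1]]))
    return scat,
ani = animation.FuncAnimation(fig, update, frames=len(x), interval=50, blit=True)
plt.show()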

How to rotate a cylinder without causing a 'sheared' appearance

I have plotted a 'tear drop' shaped cylinder in matplotlib. To obtain the tear drop shape I plotted a normal cylinder from theta = 0 to theta = pi and an ellipse from theta = pi to theta = 2pi. However I am now trying to 'spin' the cylinder around its axis, which here is conveniently the z-axis.
I tried using the rotation matrix for rotating around the z-axis, which Wikipedia gives as:
Rz(a) = [[cos(a), -sin(a), 0], [sin(a), cos(a), 0], [0, 0, 1]]
However when I try to rotate through -pi/3 radians, the cylinder becomes very disfigured.
Is there anyway to prevent this from happening?
Here is my code:
import numpy as np
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from math import sin, cos, pi
import math
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
theta = np.linspace(0,2*pi, 1200)
Z = np.linspace(0,5,1000+600)
Z,theta = np.meshgrid(Z, theta)
X = []
Y = []
R = 0.003
#calculate the x and y values
for i in theta:
    cnt = 0
    tempX = []
    tempY = []
    for j in i:
        #circle
        if(i[0]<=pi):
            tempX.append(R*cos(j))
            tempY.append(R*sin(j))
            cnt+=1
        #ellipse
        else:
            tempX.append(R*cos(j))
            tempY.append(0.006*sin(j))
    X.append(tempX)
    Y.append(tempY)
X1 = np.array(X)
Y1 = np.array(Y)
#rotate around the Z axis
a = -pi/3
for i in range(len(X)):
    X1[i] = cos(a)*X1[i]-sin(a)*Y1[i]
    Y1[i] = sin(a)*X1[i]+cos(a)*Y1[i]
#plot
ax.plot_surface(X1,Y1,Z,linewidth = 0, shade = True, alpha = 0.3)
ax.set_xlim(-0.01,0.01)
ax.set_ylim(-0.01, 0.01)
azimuth = 173
elevation = 52
ax.view_init(elevation, azimuth)
plt.show()
Your rotation is flawed: to calculate Y1[i] you need the old value of X1[i], but you have already updated it. You can try something like
X1[i], Y1[i] = cos(a)*X1[i]-sin(a)*Y1[i], sin(a)*X1[i]+cos(a)*Y1[i]
if you want to make the matrix multiplication a bit more obvious (and fix the bug) you could also do the following (please double-check that the matrix is correct and that the multiplication is in the right order, I did not test this):
rotation_matrix = np.array([[cos(a), -sin(a)], [sin(a), cos(a)]])
x, y = zip(*[(x, y) @ rotation_matrix for x, y in zip(x, y)])
the @ operator is new in Python 3.5 and for numpy arrays it's defined to be matrix multiplication. If you are on a version below 3.5 you can use np.dot.
The zip(*...) is necessary to get a pair of lists instead of a list of pairs. See also this answer
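If you want to avoid the row loop entirely, here is an untested vectorized sketch (assuming the X1, Y1, Z arrays and ax from the question): stack the coordinates and rotate every grid point in a single matrix product:
import numpy as np
from math import sin, cos, pi
a = -pi/3
rotation_matrix = np.array([[cos(a), -sin(a)],
                            [sin(a),  cos(a)]])
pts = np.stack([X1.ravel(), Y1.ravel()])  # shape (2, N)
X1r, Y1r = (rotation_matrix @ pts).reshape(2, *X1.shape)
ax.plot_surface(X1r, Y1r, Z, linewidth=0, alpha=0.3)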

Using FiPy and Mayavi to solve the diffusion equation in 3D

I'm interested in solving,
\frac{\partial \phi}{\partial t} - D \nabla^2 \phi - \alpha \phi + \gamma \phi = 0
The following is working, but I have a few questions:
1. Is it possible to increase performance with FiPy? The nx, ny, nz bins feel very small here, yet the computation takes a long time. I also don't understand why the arrays X, Y, and Z are so large.
2. Notice that in the first frame we are zoomed in. How can I force the extents to automatically be [0..nx, 0..ny, 0..nz] in all plots?
3. The data for the first frame is a sphere of points with value 1.0 surrounded by 0.0. Why does there appear to be a gradient? Is Mayavi interpolating? If so, how can I disable this?
Code:
from fipy import *
import mayavi.mlab as mlab
import numpy as np
import time
# Spatial parameters
nx = ny = nz = 30 # bins
dx = dy = dz = 1 # Must this be an integer?
L = nx * dx
# Diffusion and time step
D = 1.
dt = 10.0 * dx**2 / (2. * D)
steps = 4
# Initial value and radius of concentration
phi0 = 1.0
r = 3.0
# Rates
alpha = 1.0 # Source coefficient
gamma = .01 # Sink coefficient
mesh = Grid3D(nx=nx, ny=ny, nz=nz, dx=dx, dy=dy, dz=dz)
X, Y, Z = mesh.cellCenters # These are large arrays
phi = CellVariable(mesh=mesh, name=r"$\phi$", value=0.)
src = phi * alpha # Source term (zeroth order reaction)
degr = -gamma * phi # Sink term (degredation)
eq = TransientTerm() == DiffusionTerm(D) + src + degr
# Initial concentration is a sphere located in the center of a bounded cube
phi.setValue(1.0, where=( ((X-nx/2))**2 + (Y-ny/2)**2 + (Z-nz/2)**2 < r**2) )
# Solve
start_time = time.time()
results = [phi.getNumericValue().copy()]
for step in range(steps):
    eq.solve(var=phi, dt=dt)
    results.append(phi.getNumericValue().copy())
print 'Time elapsed:', time.time() - start_time
# Plot
for i, res in enumerate(results):
    fig = mlab.figure()
    res = res.reshape(nx, ny, nz)
    mlab.contour3d(res, opacity=.3, vmin=0, vmax=1, contours=100, transparent=True, extent=[0, 10, 0, 10, 0, 10])
    mlab.colorbar()
    mlab.savefig('diffusion3d_%i.png'%(i+1))
    mlab.close()
Time elapsed: 68.2 seconds
It's hard to tell from your question, but in the course of diagnosing things, I discovered that the LinearLUSolver scales very poorly as the dimension of the problem increases (see https://github.com/usnistgov/fipy/issues/474).
For this symmetric problem, PySparse should use the PCG solver and Trilinos should use GMRES. If you didn't install either of these, then you'll get the SciPy sparse solvers, which defaults to LU (I don't know why; something for us to look into), and things will be really slow in 3D. Try adding solver=LinearGMRESSolver() to your eq.solve(...) statement.
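For example (an untested sketch; LinearGMRESSolver is already in scope via from fipy import *):
eq.solve(var=phi, dt=dt, solver=LinearGMRESSolver())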
As far as the size of X, Y, and Z, you've declared a 30*30*30 cube of cells, so each of the cell center coordinate vectors will be 27000 elements long. Did you have a different expectation for cellCenters?
I suggest you subclass our MayaviDaemon class, or at least look at how it sets up the display in Mayavi. In short, we set a data_set_clipper to the desired bounds.
I don't know.

Using set_array with pyplot.pcolormesh ruins figure

I have xx and yy matrices created with np.meshgrid, and a grid matrix of values created by operating on xx and yy. I plot the results using graph = plt.pcolormesh(...) and get this:
Then, when I try to update the grid matrix in the plot using graph.set_array(grid.ravel()), the figure is ruined.
Does anyone know how to avoid this?
Here is my full code if it helps:
from pylab import *
import numpy as np
import matplotlib.pyplot as plt
from obspy import read
dx = 5 # km
dt = 5 # sec
nx = 500
ny = 500
v = 3.5 # km/s
p = 1/v
t_min = np.int_(np.sqrt((nx*dx)**2 + (ny*dx)**2))/v
nt = np.int_(t_min/dt)
# Receiver position
rx = 40 * dx
ry = 40 * dx
# Signal with ones
signal = np.zeros(2*nt)
signal[0:len(signal):len(signal)//10] = 1
# Create grid:
x_vector = np.arange(0, nx)*dx
y_vector = np.arange(0, ny)*dx
xx, yy = np.meshgrid(x_vector, y_vector)
# Distance from source to grid point
rr = np.int_(np.sqrt((xx - rx)**2 + (yy - ry)**2))
# travel time grid
tt_int = np.int_(rr/v)
tt = rr/v
# Read window of signal
wlen = np.int_(t_min/dt)
signal_window = signal[0:wlen]
grid = signal_window[tt_int//dt]
ax = plt.subplot(111)
graph = plt.pcolormesh(xx, yy, grid, cmap=mpl.cm.Reds)
plt.colorbar()
plt.plot(rx, ry, 'rv', markersize=10)
plt.xlabel('km')
plt.ylabel('km')
# plt.savefig('anitestnormal.png', bbox_inches='tight')
signal_window = signal[wlen:wlen * 2]
grid = signal_window[tt_int//dt]
graph.set_array(grid.ravel())
# plt.ion()
plt.show()
This is rather tricky ... but I believe that your assertion about the dimensions is correct. It has to do with how pcolormesh creates the QuadMesh object.
The documentation states that:
A quadrilateral mesh is represented by a (2 x ((meshWidth + 1) * (meshHeight + 1))) numpy array coordinates
In this context, meshWidth is your xx, meshHeight is your yy. When you set the array explicitly using set_array, pcolormesh wants to interpret it directly as the (meshWidth x meshHeight) quadrilaterals, and therefore requires one less point in each dimension.
When I tested it, I got the following behavior - if you change
graph.set_array(grid.ravel())
to
graph.set_array(grid[:-1,:-1].ravel())
your plot will look like it should.
Looking at the code, it seems that in the initial call to pcolormesh, if xx and yy are given, they should actually be defined with one more point in each dimension than the value array; if they aren't (are off by one), the value array is silently truncated by one in each dimension. So you should get the same answer even if you use grid[:-1,:-1] in the first call as well.
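To illustrate (a minimal sketch of the behavior described above, under matplotlib's old flat-shading rules where a value array the same shape as xx/yy is silently truncated):
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 1, 6)
xx, yy = np.meshgrid(x, x)
C = xx * yy  # 6x6 values, same shape as xx and yy
graph = plt.pcolormesh(xx, yy, C)  # last row/column of C is dropped internally
graph.set_array(C[:-1, :-1].ravel())  # so updates need the 5x5 interior block
plt.colorbar()
plt.show()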

Method to uniformly randomly populate a disk with points in python

I have an application that requires a disk populated with 'n' points in a quasi-random fashion. I want the points to be somewhat random, but still have a more or less regular density over the disk.
My current method is to place a point, check if it's inside the disk, and then check if it is also far enough away from all other points already kept. My code is below:
import os
import random
import math
# ------------------------------------------------ #
# geometric constants
center_x = -1188.2
center_y = -576.9
center_z = -3638.3
disk_distance = 2.0*5465.6
disk_diam = 5465.6
# ------------------------------------------------ #
pts_per_disk = 256
closeness_criteria = 200.0
min_closeness_criteria = disk_diam/closeness_criteria
disk_center = [(center_x-disk_distance),center_y,center_z]
pts_in_disk = []
while len(pts_in_disk) < pts_per_disk:
    potential_pt_x = disk_center[0]
    potential_pt_dy = random.uniform(-disk_diam/2.0, disk_diam/2.0)
    potential_pt_y = disk_center[1]+potential_pt_dy
    potential_pt_dz = random.uniform(-disk_diam/2.0, disk_diam/2.0)
    potential_pt_z = disk_center[2]+potential_pt_dz
    potential_pt_rad = math.sqrt((potential_pt_dy)**2+(potential_pt_dz)**2)
    if potential_pt_rad < (disk_diam/2.0):
        far_enough_away = True
        for pt in pts_in_disk:
            if math.sqrt((potential_pt_x - pt[0])**2+(potential_pt_y - pt[1])**2+(potential_pt_z - pt[2])**2) > min_closeness_criteria:
                pass
            else:
                far_enough_away = False
                break
        if far_enough_away:
            pts_in_disk.append([potential_pt_x,potential_pt_y,potential_pt_z])
outfile_name = "pt_locs_x_lo_"+str(pts_per_disk)+"_pts.txt"
outfile = open(outfile_name,'w')
for pt in pts_in_disk:
    outfile.write(" ".join([("%.5f" % (pt[0]/1000.0)),("%.5f" % (pt[1]/1000.0)),("%.5f" % (pt[2]/1000.0))])+'\n')
outfile.close()
To get the most even point density, I basically run this script iteratively from another script, reducing the 'closeness' criterion on each successive iteration. At some point, the script can no longer finish, and I just use the points of the last successful iteration.
So my question is rather broad: is there a better way to do this? My method is OK for now, but my gut says that there is a better way to generate such a field of points.
An illustration of the output is graphed below: one with a high closeness criterion, and another with the 'lowest found' closeness criterion (what I want).
A simple solution based on Disk Point Picking from MathWorld:
import numpy as np
import matplotlib.pyplot as plt
n = 1000
r = np.random.uniform(low=0, high=1, size=n)  # radius squared, uniform in [0, 1)
theta = np.random.uniform(low=0, high=2*np.pi, size=n)  # angle
x = np.sqrt(r) * np.cos(theta)  # sqrt makes the point density uniform in area
y = np.sqrt(r) * np.sin(theta)
# for plotting circle line:
a = np.linspace(0, 2*np.pi, 500)
cx,cy = np.cos(a), np.sin(a)
fg, ax = plt.subplots(1, 1)
ax.plot(cx, cy,'-', alpha=.5) # draw unit circle line
ax.plot(x, y, '.') # plot random points
ax.axis('equal')
ax.grid(True)
fg.canvas.draw()
plt.show()
It gives a uniform scatter of points over the unit disk.
Alternatively, you also could create a regular grid and distort it randomly:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.tri as tri
n = 20
tt = np.linspace(-1, 1, n)
xx, yy = np.meshgrid(tt, tt) # create unit square grid
s_x, s_y = xx.ravel(), yy.ravel()
ii = np.argwhere(s_x**2 + s_y**2 <= 1).ravel() # mask off unwanted points
x, y = s_x[ii], s_y[ii]
triang = tri.Triangulation(x, y) # create triangluar grid
# distort the grid
g = .5 # distortion factor
rx = x + np.random.uniform(low=-g/n, high=g/n, size=x.shape)
ry = y + np.random.uniform(low=-g/n, high=g/n, size=y.shape)
rtri = tri.Triangulation(rx, ry, triang.triangles) # distorted grid
# for circle:
a = np.linspace(0, 2*np.pi, 500)
cx,cy = np.cos(a), np.sin(a)
fg, ax = plt.subplots(1, 1)
ax.plot(cx, cy,'k-', alpha=.2) # circle line
ax.triplot(triang, "g-", alpha=.4)
ax.triplot(rtri, 'b-', alpha=.5)
ax.axis('equal')
ax.grid(True)
fg.canvas.draw()
plt.show()
It gives a jittered triangular grid clipped to the unit circle.
The triangles are just there for visualization. The obvious disadvantage is that, depending on your choice of grid, there will be larger or smaller "holes" either in the middle or at the borders (as shown here), due to the grid discretization.
If you have a defined area like a disc (circle) that you wish to generate random points within you are better off using an equation for a circle and limiting on the radius:
x^2 + y^2 = r^2 (0 < r < R)
or parametrized to two variables
cos(a) = x/r
sin(a) = y/r
sin^2(a) + cos^2(a) = 1
To generate something like the pseudo-random distribution with low density you should take the following approach:
For each small range of r and a (each ring-and-slice cell), choose n points at random within it.
This allows you to generate your distribution so that it roughly meets your density criteria.
To understand why this works, imagine your circle first divided into small rings of thickness dr, then divided into pie slices of angle da. Your randomness now has equal probability over each boxed area around the circle. If you divide the allowed randomness over these areas throughout your circle, you get a more even distribution around the overall circle, with small random variation within the individual areas, giving you the pseudo-random look and feel you are after.
Now your job is just to generate n points for each given area. You will want n to depend on r, since the area of each division grows as you move out from the center. You can proportion this to the exact change in area each division brings:
for the (n-1)-th to n-th ring:
d(Area,n,n-1) = Area(n) - Area(n-1)
The area of any given ring is:
Area(n) = pi*(dr*n)^2 - pi*(dr*(n-1))^2
So the difference becomes:
d(Area,n,n-1) = [pi*(dr*n)^2 - pi*(dr*(n-1))^2] - [pi*(dr*(n-1))^2 - pi*(dr*(n-2))^2]
d(Area,n,n-1) = pi*[(dr*n)^2 - 2*(dr*(n-1))^2 + (dr*(n-2))^2]
You could expand on this to gain some insight into how much n should increase, but it may be faster to just guess at some percentage increase (30%) or so.
The example I have provided is a small subset and decreasing da and dr will dramatically improve your results.
Here is some rough code for generating such points:
import random
import math
R = 10.
n_rings = 10.
n_angles = 10.
dr = 10./n_rings
da = 2*math.pi/n_angles
base_points_per_division = 3
increase_per_level = 1.1
points = []
ring = 0
while ring < n_rings:
    angle = 0
    while angle < n_angles:
        for i in xrange(int(base_points_per_division)):
            ra = angle*da + da*random.random()
            rr = ring*dr + dr*random.random()  # 'ring', not the undefined 'r'
            x = rr*math.cos(ra)
            y = rr*math.sin(ra)
            points.append((x,y))
        angle += 1
    base_points_per_division = base_points_per_division*increase_per_level
    ring += 1
I tested it with the parameters:
n_rings = 20
n_angles = 20
base_points_per_division = .9
increase_per_level = 1.1
And got the following results:
It looks denser than your provided image, but I imagine further tweaking of those variables could be beneficial.
You can add an additional part to scale the density properly by calculating the number of points per ring.
points_per_ring = density*math.pi*(dr**2)*(2*n+1)
points_per_division = points_per_ring/n_angles
This will provide an even better scaled distribution.
density = .03
points = []
ring = 0
while ring < n_rings:
    angle = 0
    base_points_per_division = density*math.pi*(dr**2)*(2*ring+1)/n_angles
    while angle < n_angles:
        for i in xrange(int(base_points_per_division)):
            ra = angle*da + min(da,da*random.random())
            rr = ring*dr + dr*random.random()
            x = rr*math.cos(ra)
            y = rr*math.sin(ra)
            points.append((x,y))
        angle += 1
    ring += 1
This gives better results using the following parameters:
R = 1.
n_rings = 10.
n_angles = 10.
density = 10/(dr*da) # ~ ten points per unit area
With a graph...
And for fun you can graph the divisions to see how well the result matches your distribution, and adjust.
Depending on how random the points need to be, it may be simple enough to just make a grid of points within the disk, and then displace each point by some small but random amount.
It may be that you want more randomness, but if you just want to fill your disc with an even-looking distribution of points that aren't on an obvious grid, you could try a spiral with a random phase.
import math
import random
import pylab
n = 300
alpha = math.pi * (3 - math.sqrt(5)) # the "golden angle"
phase = random.random() * 2 * math.pi
points = []
for k in xrange(n):
    theta = k * alpha + phase
    r = math.sqrt(float(k)/n)  # sqrt keeps the point density uniform in area
    points.append((r * math.cos(theta), r * math.sin(theta)))
pylab.scatter(*zip(*points))
pylab.show()
Probability theory ensures that the rejection method is an appropriate method
to generate uniformly distributed points within the disk, D(0,r), centered at origin and of radius r. Namely, one generates points within the square [-r,r] x [-r,r], until a point falls within the disk:
do {
    generate P in [-r,r]x[-r,r];
} while (P[0]**2 + P[1]**2 > r**2);
return P;
unif_rnd_disk is a generator function implementing this rejection method:
import matplotlib.pyplot as plt
import numpy as np
import itertools
def unif_rnd_disk(r=1.0):
    pt = np.zeros(2)
    while True:
        yield pt
        while True:
            pt = -r + 2*r*np.random.random(2)
            if pt[0]**2 + pt[1]**2 <= r**2:  # compare against r squared
                break
G = unif_rnd_disk()  # generator of points in disk D(0, r=1)
X, Y = zip(*[pt for pt in itertools.islice(G, 1, 1000)])
plt.scatter(X, Y, color='r', s=3)
plt.axis('equal')
If we want to generate points in a disk centered at C(a,b), we have to apply a translation to the points in the disk D(0,r):
C=[2.0, -3.5]
plt.scatter(C[0]+np.array(X), C[1]+np.array(Y), color='r', s=3)
plt.axis('equal')
