Ising Model in Python

I'm currently working on writing code for the Ising model using Python 3. I'm still pretty new to coding. I have working code, but the output is not as expected and I can't seem to find the error. Here is my code:
import numpy as np
import random

def init_spin_array(rows, cols):
    return np.random.choice((-1, 1), size=(rows, cols))

def find_neighbors(spin_array, lattice, x, y):
    left   = (x, y - 1)
    right  = (x, y + 1 if y + 1 < (lattice - 1) else 0)
    top    = (x - 1, y)
    bottom = (x + 1 if x + 1 < (lattice - 1) else 0, y)
    return [spin_array[left[0], left[1]],
            spin_array[right[0], right[1]],
            spin_array[top[0], top[1]],
            spin_array[bottom[0], bottom[1]]]

def energy(spin_array, lattice, x, y):
    return -1 * spin_array[x, y] * sum(find_neighbors(spin_array, lattice, x, y))

def main():
    lattice = eval(input("Enter lattice size: "))
    temperature = eval(input("Enter the temperature: "))
    sweeps = eval(input("Enter the number of Monte Carlo Sweeps: "))
    spin_array = init_spin_array(lattice, lattice)
    print("Original System: \n", spin_array)
    # the Monte Carlo follows below
    for sweep in range(sweeps):
        for i in range(lattice):
            for j in range(lattice):
                e = energy(spin_array, lattice, i, j)
                if e <= 0:
                    spin_array[i, j] *= -1
                elif np.exp(-1 * e/temperature) > random.randint(0, 1):
                    spin_array[i, j] *= -1
                else:
                    continue
    print("Modified System: \n", spin_array)

main()
I think the error is in the Monte Carlo Loop, but I am not sure. The system should be highly ordered at low temperatures and become disordered past the critical temperature of 2.27. In other words, the randomness of the system should increase as T approaches 2.27. For example, at T=.1, we should see large patches of spins that are aligned, i.e. patches of -1s and 1s. Past 2.27 the system should be disordered and we should not see these patches.

Your question would make much more sense if you were to include the system size, the number of sweeps, and the average magnetisation. How many of the intermediate configurations are ordered and how many disordered? MC is a sampling technique - individual configurations mean nothing and there might (and will) be disordered states at low temperature and ordered states at high T. It is the ensemble properties (the average magnetisation) that are meaningful.
Anyway, there are three errors in your code: a small one, a medium one, and a really severe one.
The small one is that you are ignoring an entire row and an entire column while searching for neighbours in find_neighbors:
right = (x, y + 1 if y + 1 < (lattice - 1) else 0)
should be:
right = (x, y + 1 if y + 1 < lattice else 0)
or even better:
right = (x, (y + 1) % lattice)
Same applies to bottom.
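A minimal sketch of the fully wrapped neighbour lookup (this is just the same modulo fix applied to all four directions; the corrected program further down does exactly this):
# Periodic (wrap-around) neighbours via modulo. Negative indices already wrap
# in NumPy, but using % everywhere keeps the intent explicit.
def find_neighbors(spin_array, lattice, x, y):
    return [spin_array[x, (y - 1) % lattice],   # left
            spin_array[x, (y + 1) % lattice],   # right
            spin_array[(x - 1) % lattice, y],   # top
            spin_array[(x + 1) % lattice, y]]   # bottom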
The medium one is that your computation of the energy difference is off by a factor of two:
def energy(spin_array, lattice, x, y):
    return -1 * spin_array[x, y] * sum(find_neighbors(spin_array, lattice, x, y))
           ^^
The factor is actually 2*J, where J is the coupling constant, therefore having -1 there means:
- your critical temperature is halved, and more importantly...
- you have antiferromagnetic spin interaction (J < 0), so no ordered states for you even at very low temperatures.
The worst mistake however is the use of random.randint() for the rejection sampling:
elif np.exp(-1 * e/temperature) > random.randint(0, 1):
    spin_array[i, j] *= -1
You should be using random.random() instead, otherwise the transition probability will always be 50%.
Here is a modification of your program that automatically sweeps over the temperature region from 0.1 to 5.0:
import numpy as np
import random

def init_spin_array(rows, cols):
    return np.ones((rows, cols))

def find_neighbors(spin_array, lattice, x, y):
    left   = (x, y - 1)
    right  = (x, (y + 1) % lattice)
    top    = (x - 1, y)
    bottom = ((x + 1) % lattice, y)
    return [spin_array[left[0], left[1]],
            spin_array[right[0], right[1]],
            spin_array[top[0], top[1]],
            spin_array[bottom[0], bottom[1]]]

def energy(spin_array, lattice, x, y):
    return 2 * spin_array[x, y] * sum(find_neighbors(spin_array, lattice, x, y))

def main():
    RELAX_SWEEPS = 50
    lattice = eval(input("Enter lattice size: "))
    sweeps = eval(input("Enter the number of Monte Carlo Sweeps: "))
    for temperature in np.arange(0.1, 5.0, 0.1):
        spin_array = init_spin_array(lattice, lattice)
        # the Monte Carlo follows below
        mag = np.zeros(sweeps + RELAX_SWEEPS)
        for sweep in range(sweeps + RELAX_SWEEPS):
            for i in range(lattice):
                for j in range(lattice):
                    e = energy(spin_array, lattice, i, j)
                    if e <= 0:
                        spin_array[i, j] *= -1
                    elif np.exp((-1.0 * e)/temperature) > random.random():
                        spin_array[i, j] *= -1
            mag[sweep] = abs(sum(sum(spin_array))) / (lattice ** 2)
        print(temperature, sum(mag[RELAX_SWEEPS:]) / sweeps)

main()
And the result for 20x20 and 100x100 lattices and 100 sweeps (plot of average magnetisation per spin vs. temperature):
The starting configuration is a completely ordered one, to prevent the development of domain walls that are very stable at low temperatures. Also, 50 additional sweeps (RELAX_SWEEPS) are performed initially in order to thermalise the system (not nearly enough when close to the critical temperature, but the Metropolis-Hastings algorithm cannot properly handle the critical slowing down there anyway).
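If you want to turn the printed (temperature, magnetisation) pairs into that plot, here is a minimal sketch, assuming main() is modified to collect them in a list called results (a hypothetical addition) instead of printing them:
import matplotlib.pyplot as plt

# results is assumed to be a list of (temperature, average_magnetisation) tuples
temps, mags = zip(*results)
plt.plot(temps, mags, marker='o')
plt.axvline(x=2.27, linestyle='--')   # critical temperature of the 2D Ising model
plt.xlabel('Temperature')
plt.ylabel('Average magnetisation per spin')
plt.show()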


Efficient function mapping with arguments in numpy

I am trying to create a heightmap by interpolating between a bunch of heights at certain points in an area. To process the whole image, I have the following code snippet:
map_ = np.zeros((img_width, img_height))
for x in range(img_width):
    for y in range(img_height):
        map_[x, y] = calculate_height(set(points.items()), x, y)
This is calculate_height:
def distance(x1, y1, x2, y2) -> float:
    return np.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)

def calculate_height(points: set, x, y) -> float:
    total = 0
    dists = {}
    for pos, h in points:
        d = distance(pos[0], pos[1], x, y)
        if x == pos[0] and y == pos[1]:
            return h
        d = 1 / (d ** 2)
        dists[pos] = d
        total += d
    r = 0
    for pos, h in points:
        ratio = dists[pos] / total
        r += ratio * h
    return r
This snippet works perfectly, but if the image is too big, it takes a long time to process, because this is O(n^2). The problem with this is that a "too big" image is 800x600, and it takes almost a minute to process, which to me seems a bit excessive.
My goal is not to reduce the time complexity from O(n^2), but to reduce the time it takes to process images, so that I can get decently sized images in a reasonable amount of time.
I found this post, but I couldn't really try it out because it's all for a 1D array, and I have a 2D array, and I also need to pass the coordinates of each point and the set of existing points to the calculate_height function. What can I try to optimize this code snippet?
Edit: Moving set(points.items()) out of the loop as @thethiny suggested was a HUGE improvement. I had no idea it was such a heavy thing to do. This makes it fast enough for me, but feel free to add more suggestions for the next people to come by!
Edit 2: I have further optimized this processing by including the following changes:
# The first for loop inside the calculate_height function
for pos, h in points:
    d2 = distance2(pos[0], pos[1], x, y)
    if x == pos[0] and y == pos[1]:
        return h
    d2 = d2 ** -1  # 1 / (d ** 2) == d ** -2 == d2 ** -1
    dists[pos] = d2  # Having the square of d on these two lines makes no difference
    total += d2
This reduced execution time for a 200x200 image from 1.57 seconds to 0.76 seconds. The 800x600 image mentioned earlier now takes 6.13 seconds to process :D
This is what points looks like (as requested by @norok12):
# Hints added for easier understanding, I know this doesn't run
points: dict[tuple[int, int], float] = {
    (x: int, y: int): height: float,
    (x: int, y: int): height: float,
    (x: int, y: int): height: float
}
# The amount of points varies between datasets, so I can't provide a useful range other than [3, inf)
There are a few problems with your implementation.
Essentially what you're implementing is approximation using radial basis functions.
The usual algorithm for that looks like:
sum_w = 0
sum_wv = 0
for p, v in points.items():
    d = distance(p, x)
    w = 1.0 / (d * d)
    sum_w += w
    sum_wv += w * v
return sum_wv / sum_w
Your code has some extra logic for bailing out if p==x - which is good.
But it also allocates an array of distances - which this single loop form does not need.
This brings execution of an example in a workbook from 13s to 12s.
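Spelled out in Python, the single-loop form could look like this (a sketch only; idw_height is a hypothetical name and points is the original dict from the question):
def idw_height(points, x, y) -> float:
    # Inverse-distance-weighted average over {(px, py): height} pairs.
    sum_w = 0.0
    sum_wv = 0.0
    for (px, py), v in points.items():
        if px == x and py == y:
            return v                      # bail out exactly on a data point
        d2 = (px - x) ** 2 + (py - y) ** 2
        w = 1.0 / d2
        sum_w += w
        sum_wv += w * v
    return sum_wv / sum_w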
The next thing to note is that collapsing the points dict into a numpy array gives us the chance to use numpy functions.
points_array = np.array([(p[0][0],p[0][1],p[1]) for p in points.items()]).astype(np.float32)
Then we can write the function as
def calculate_height_fast(points, x, y) -> float:
    dx = points[:, 0] - x
    dy = points[:, 1] - y
    r = np.hypot(dx, dy)
    w = 1.0 / (r * r)
    sum_w = np.sum(w)
    return np.sum(points[:, 2] * w) / np.sum(w)
This brings our time down to 658ms. But we can do better yet...
Since we're now using numpy functions we can apply numba.njit to JIT compile our function.
import numba

@numba.njit
def calculate_height_fast(points, x, y) -> float:
    dx = points[:, 0] - x
    dy = points[:, 1] - y
    r = np.hypot(dx, dy)
    w = 1.0 / (r * r)
    sum_w = np.sum(w)
    return np.sum(points[:, 2] * w) / np.sum(w)
This was giving me 105ms (after the function had been run once to ensure it got compiled).
This is a speed-up of 130x over the original implementation (for my data).
You can see the full implementations here
This really is a small addition to @MichaelAnderson's detailed answer.
calculate_height_fast() can probably be made a bit faster by optimizing further with explicit looping:
@numba.njit
def calculate_height_faster(points, x, y) -> float:
    dx = points[:, 0] - x
    dy = points[:, 1] - y
    r = np.hypot(dx, dy)
    # compute weighted average
    n = r.size
    sum_w = sum_wp = 0
    for i in range(n):
        w = 1.0 / (r[i] * r[i])
        sum_w += w
        sum_wp += points[i, 2] * w
    return sum_wp / sum_w
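For completeness, here is a sketch of how either of the JIT-compiled functions might be dropped into the original image loop (img_width, img_height and points are the names from the question):
# Build the (n_points, 3) float32 array once, outside the loops.
points_array = np.array(
    [(px, py, h) for (px, py), h in points.items()]
).astype(np.float32)

map_ = np.zeros((img_width, img_height))
for x in range(img_width):
    for y in range(img_height):
        map_[x, y] = calculate_height_faster(points_array, x, y)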

minimum spanning tree remove all extra edges

Extra edge: an edge between two points, where each of those points is also connected to another edge.
I want to disconnect the MST by deleting these extra edges.
What is the best approach to minimize the weight of the new, disconnected MST, and in what order should I delete these edges (deleting one could affect the others)?
My approach would be to delete the extra edges with the biggest weight first?
https://prnt.sc/1xq1msp
In this case, removing 7 (C-D) means no more edges can be deleted. But you could also remove B-C and then remove D-E, which is a better solution.
Here's an exact solution with NumPy/SciPy/OR-Tools that uses a kd-tree to enumerate a sparse subset of edges that can possibly be included in an optimal solution, then formulates and solves a mixed integer program. Not sure it will scale to your needs though; you could try setting a gap limit if you're willing to settle for an approximation.
import collections
import numpy
import scipy.spatial
from ortools.linear_solver import pywraplp
from random import random  # used by random_point() below

def min_edge_cover(points):
    # Enumerate the candidate edges.
    candidate_edges = set()
    tree = scipy.spatial.KDTree(points)
    min_distances = numpy.ndarray(len(points))
    for i, p in enumerate(points):
        if i % 1000 == 0:
            print(i)
        distances, indexes = tree.query(p, k=2)
        # Ignore p itself.
        d, j = (
            (distances[1], indexes[1])
            if indexes[0] == i
            else (distances[0], indexes[0])
        )
        candidate_edges.add((min(i, j), max(i, j)))
        min_distances[i] = d
    for i, p in enumerate(points):
        if i % 1000 == 0:
            print(i)
        # An edge is profitable only if it's shorter than the sum of the
        # distance from each of its endpoints to that endpoint's nearest
        # neighbor.
        indexes = tree.query_ball_point(p, 2 * min_distances[i])
        for j in indexes:
            if i == j:
                continue
            discount = (
                min_distances[i] + min_distances[j]
            ) - scipy.spatial.distance.euclidean(points[i], points[j])
            if discount >= 0:
                candidate_edges.add((min(i, j), max(i, j)))
    candidate_edges = sorted(candidate_edges)
    # Formulate and solve a mixed integer program to find the minimum distance
    # edge cover. There's a way to do this with general weighted matching, but
    # OR-Tools doesn't expose that library yet.
    solver = pywraplp.Solver.CreateSolver("SCIP")
    objective = 0
    edge_variables = []
    coverage = collections.defaultdict(lambda: 0)
    for i, j in candidate_edges:
        x = solver.BoolVar("x{}_{}".format(i, j))
        objective += scipy.spatial.distance.euclidean(points[i], points[j]) * x
        coverage[i] += x
        coverage[j] += x
        edge_variables.append(x)
    solver.Minimize(objective)
    for c in coverage.values():
        solver.Add(c >= 1)
    solver.EnableOutput()
    assert solver.Solve() == pywraplp.Solver.OPTIMAL
    return {e for (e, x) in zip(candidate_edges, edge_variables) if x.solution_value()}

def random_point():
    return complex(random(), random())

def test(points, graphics=False):
    cover = min_edge_cover(points)
    if not graphics:
        return
    with open("out.ps", "w") as f:
        print("%!PS", file=f)
        print(0, "setlinewidth", file=f)
        inch = 72
        scale = 7 * inch
        print((8.5 * inch - scale) / 2, (11 * inch - scale) / 2, "translate", file=f)
        for x, y in points:
            print(scale * x, scale * y, 1, 0, 360, "arc", "fill", file=f)
        for i, j in cover:
            xi, yi = points[i]
            xj, yj = points[j]
            print(
                scale * xi,
                scale * yi,
                "moveto",
                scale * xj,
                scale * yj,
                "lineto",
                file=f,
            )
            print("stroke", file=f)
        print("showpage", file=f)

test(numpy.random.rand(100, 2), graphics=True)
test(numpy.random.rand(10000, 2))
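As a quick sanity check (a sketch, not part of the answer above), the total length of the selected edges can be summed from the returned set:
points = numpy.random.rand(100, 2)
cover = min_edge_cover(points)
total_weight = sum(
    scipy.spatial.distance.euclidean(points[i], points[j]) for i, j in cover
)
print(len(cover), "edges, total length", total_weight)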

Unable to increment integers as expected

I am following a short youtube video tutorial on Monte Carlo problems with python (https://www.youtube.com/watch?v=BfS2H1y6tzQ) and the code isn't working. The goal is to see how many times I will have to take transport to get back home, considering you take transport if the distance is greater than 4.
So I assumed the issue was that every time random_walk is called, the x, y variables are being reset to zero, so the distance is not always within a 0-1 range and isn't incrementing as expected.
import random

def random_walk(n):
    x, y = 0, 0
    for i in range(n):
        (dx, dy) = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
        x += dx
        y += dy
    return (x, y)

number_of_walks = 10000
no_transport = 0
for walk_length in range(1, 31):
    for i in range(number_of_walks):
        (x, y) = random_walk(walk_length)
        distance = abs(x) + abs(y)
        if distance <= 4:
            no_transport += 1
    transported_percentage = float(no_transport) / number_of_walks
    print("Walk Size = ", walk_length, " / % transported = ", 100 * transported_percentage)
I expect the results to show what percentage of the time I had to take transport home; instead, I get inaccurate numbers like 100, 200, 300%. Could the video tutorial have incorrect code?
You need to reset no_transport inside the main loop, because otherwise it accumulates over all your tests instead of being counted per walk length.
for walk_length in range(1, 31):
    no_transport = 0
Also, the percentage being calculated is the fraction of no_transport walks, not the fraction of walks that needed transport. This gives the percentage transported:
transported_percentage = (number_of_walks - float(no_transport)) / number_of_walks
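Putting both fixes together, the corrected main loop might look like this (a sketch based on the question's code, reusing its random_walk function):
number_of_walks = 10000

for walk_length in range(1, 31):
    no_transport = 0                      # reset the counter for every walk length
    for i in range(number_of_walks):
        (x, y) = random_walk(walk_length)
        distance = abs(x) + abs(y)
        if distance <= 4:
            no_transport += 1
    # fraction of walks that ended more than 4 blocks away, i.e. needed transport
    transported_percentage = (number_of_walks - float(no_transport)) / number_of_walks
    print("Walk Size = ", walk_length, " / % transported = ", 100 * transported_percentage)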

Why isn't this centered fourth-order-accurate finite differencing scheme yielding fourth-order convergence for solving PDEs

I am solving the dissipation equation using a finite differencing scheme. The initial condition is a half sine wave with Dirichlet boundary conditions on both sides. I insert an extra point on each side of the domain to enforce the Dirichlet boundary condition while maintaining fourth-order accuracy, then use forward Euler to evolve it in time.
When I switch from the second-order-accurate stencil, (u[i+1] - 2*u[i] + u[i-1]) / dx**2, to the fourth-order-accurate stencil, (-u[i+2] + 16*u[i+1] - 30*u[i] + 16*u[i-1] - u[i-2]) / (12 * dx**2), I do not see an improvement in the rate of convergence when I plot the error estimate against the grid spacing.
I wrote and commented code that shows my problem. When I use the 5-point strategy, the rate of convergence is the same as with the 3-point one:
Why is this happening? Why isn't the fourth-order-accurate stencil helping the convergence rate? I combed over this carefully and I think that there must be some issue in my understanding.
# Let's evolve the diffusion equation in time with Dirichlet BCs

# Load modules
import numpy as np
import matplotlib.pyplot as plt

# Domain size
XF = 1

# Viscosity
nu = 0.01

# Spatial differentiation function, approximates d^2u/dx^2
def diffusive_dudt(un, nu, dx, strategy='5c'):
    undiff = np.zeros(un.size, dtype=np.float128)
    # O(h^2)
    if strategy == '3c':
        undiff[2:-2] = nu * (un[3:-1] - 2 * un[2:-2] + un[1:-3]) / dx**2
    # O(h^4)
    elif strategy == '5c':
        undiff[2:-2] = nu * (-1 * un[4:] + 16 * un[3:-1] - 30 * un[2:-2] + 16 * un[1:-3] - un[:-4]) / (12 * dx**2)
    else:
        raise(IOError("Invalid diffusive strategy"))
    return undiff

def geturec(x, nu=.05, evolution_time=1, u0=None, n_save_t=50, ubl=0., ubr=0., diffstrategy='5c', dt=None, returndt=False):
    dx = x[1] - x[0]
    # Prescribe cfl=0.1 and ftcs=0.2
    if dt is None:
        dt = min(.1 * dx / 1., .2 / nu * dx ** 2)
    if returndt:
        return dt
    nt = int(evolution_time / dt)
    divider = int(nt / float(n_save_t))
    if divider == 0:
        raise(IOError("not enough time steps to save %i times" % n_save_t))
    # The default initial condition is a half sine wave.
    u_initial = ubl + np.sin(x * np.pi)
    if u0 is not None:
        u_initial = u0
    u = u_initial
    u[0] = ubl
    u[-1] = ubr
    # Insert ghost cells; extra cells on the left and right
    # for the edge cases of the finite difference scheme
    x = np.insert(x, 0, x[0] - dx)
    x = np.insert(x, -1, x[-1] + dx)
    u = np.insert(u, 0, ubl)
    u = np.insert(u, -1, ubr)
    # u_record holds all the snapshots. They are evenly spaced in time,
    # except the final and initial states
    u_record = np.zeros((x.size, int(nt / divider + 2)))
    # Evolve through time
    ii = 1
    u_record[:, 0] = u
    for _ in range(nt):
        un = u.copy()
        dudt = diffusive_dudt(un, nu, dx, strategy=diffstrategy)
        # forward Euler time step
        u = un + dt * dudt
        # Save every xth time step
        if _ % divider == 0:
            #print "C # ---> ", u * dt / dx
            u_record[:, ii] = u.copy()
            ii += 1
    u_record[:, -1] = u
    return u_record[1:-1, :]

# Define L-1 norm
def ul1(u, dx): return np.sum(np.abs(u)) / u.size

# Now let's sweep through dxs to find the convergence rate

# Define dxs to sweep
xrang = np.linspace(350, 400, 4)

# This function accepts a differentiation key name and returns a list of dx and L-1 points
def errf(strat):
    # Lists to record dx and L-1 points
    ypoints = []
    dxs = []
    # Establish truth value with a more-resolved grid
    x = np.linspace(0, XF, 800); dx = x[1] - x[0]
    # Record truth L-1 and dt associated with finest "truth" grid
    trueu = geturec(nu=nu, x=x, diffstrategy=strat, evolution_time=2, n_save_t=20, ubl=0, ubr=0)
    truedt = geturec(nu=nu, x=x, diffstrategy=strat, evolution_time=2, n_save_t=20, ubl=0, ubr=0, returndt=True)
    trueqoi = ul1(trueu[:, -1], dx)
    # Sweep dxs
    for nx in xrang:
        x = np.linspace(0, XF, nx); dx = x[1] - x[0]
        dxs.append(dx)
        # Run solver, hold dt fixed
        u = geturec(nu=nu, x=x, diffstrategy='5c', evolution_time=2, n_save_t=20, ubl=0, ubr=0, dt=truedt)
        # Record |L-1(dx) - L-1(truth)|
        qoi = ul1(u[:, -1], dx)
        ypoints.append(np.abs(trueqoi - qoi))
    return dxs, ypoints

# Plot results. The fourth order method should have a slope of 4 on the log-log plot.
from scipy.optimize import minimize as mini
strategy = '5c'
dxs, ypoints = errf(strategy)

def fit2(a): return 1000 * np.sum((a * np.array(dxs) ** 2 - ypoints) ** 2)
def fit4(a): return 1000 * np.sum((np.exp(a) * np.array(dxs) ** 4 - ypoints) ** 2)

a = mini(fit2, 500).x
b = mini(fit4, 11).x
plt.plot(dxs, a * np.array(dxs)**2, c='k', label=r"$\nu^2$", ls='--')
plt.plot(dxs, np.exp(b) * np.array(dxs)**4, c='k', label=r"$\nu^4$")
plt.plot(dxs, ypoints, label=r"Convergence", marker='x')
plt.yscale('log')
plt.xscale('log')
plt.xlabel(r"$\Delta X$")
plt.ylabel("$L-L_{true}$")
plt.title(r"$\nu=%f, strategy=%s$" % (nu, strategy))
plt.legend()
plt.savefig('/Users/kilojoules/Downloads/%s.pdf' % strategy, bbox_inches='tight')
The error of the scheme is O(dt, dx²) resp. O(dt, dx⁴). As you keep dt = O(dx²), the combined error is O(dx²) in both cases. You could try to scale dt = O(dx⁴) in the second case; however, the balance of truncation and floating-point error of the Euler method, or any first-order method, is reached around L*dt = 1e-8, where L is a Lipschitz constant for the right-hand side, so higher for more complex right-hand sides. Even in the best case, going beyond dx = 0.01 would be futile. Using a higher-order method in the time direction should help.
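As an illustration of that last suggestion, here is a minimal sketch of a classical fourth-order Runge-Kutta step that reuses the question's diffusive_dudt (rk4_step is a hypothetical helper, not part of the original code):
def rk4_step(u, nu, dx, dt, strategy='5c'):
    # Classical RK4: O(dt**4) in time, so the O(dx**4) spatial error is no longer hidden.
    k1 = diffusive_dudt(u, nu, dx, strategy)
    k2 = diffusive_dudt(u + 0.5 * dt * k1, nu, dx, strategy)
    k3 = diffusive_dudt(u + 0.5 * dt * k2, nu, dx, strategy)
    k4 = diffusive_dudt(u + dt * k3, nu, dx, strategy)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# inside the time loop of geturec, instead of u = un + dt * dudt:
# u = rk4_step(un, nu, dx, dt, strategy=diffstrategy)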
You used the wrong error metric. If you compare the fields on a point-by-point basis you'll get the convergence rate you were after.
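A minimal sketch of such a point-by-point comparison, written as if it sat inside the dx sweep of the question's errf (trueu, truedt, xrang, XF and nu are the question's names; truex is a hypothetical name for the fine grid):
truex = np.linspace(0, XF, 800)
for nx in xrang:
    x = np.linspace(0, XF, int(nx))
    u = geturec(nu=nu, x=x, diffstrategy='5c', evolution_time=2,
                n_save_t=20, ubl=0, ubr=0, dt=truedt)
    # Interpolate the fine-grid "truth" onto the coarse grid and compare fields directly.
    true_on_coarse = np.interp(x, truex, trueu[:, -1])
    pointwise_error = np.mean(np.abs(u[:, -1] - true_on_coarse))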

How to run my python code in Cython?

I have currently made quite a large program in Python, and when I run it, it takes about 3 minutes to complete the full calculation. Eventually I want to increase N to about 400 and change m in the for loop to an even larger number; this would probably take hours to calculate, which I want to cut down.
It's steps 1-6 that take a long time.
When attempting to run this with Cython (i.e. importing pyximport and then importing my file), I get the following errors:
FDC.pyx:49:19: 'range' not a valid cython language construct
FDC.pyx:49:19: 'range' not a valid cython attribute or is being used incorrectly
from physics import *
from operator import add, sub
import pylab

################ PRODUCING CHARGES AT RANDOM IN r #############
N = 11  # Number of point charges
x = zeros(N, float)  # grid
y = zeros(N, float)
i = 0
while i < N:  # code to produce values of x and y within r
    x[i] = random.uniform(0, 1)
    y[i] = random.uniform(0, 1)
    if x[i] ** 2 + y[i] ** 2 <= 1:
        i += 1
print x, y

def r(x, y):  # distance between particles
    return sqrt(x**2 + y**2)

o = 0; k = 0; W = 0  # sum of energy for initial charges
for o in range(0, N):
    for k in range(0, N):
        if o == k:
            continue
        xdist = x[o] - x[k]
        ydist = y[o] - y[k]
        W += 0.5 / (r(xdist, ydist))
print "Initial Energy:", W

##################### STEPS 1-6 ######################
d = 0.01  # fixed change in length
charge = (x, y)
l = 0; m = 0; n = 0
prevsW = 0.
T = 100
for q in range(0, 100):
    T = 0.9 * T
    for m in range(0, 4000):  # steps 1 - 6 in notes looped over
        xRef = random.randint(0, 1)  # Choosing x or y
        yRef = random.randint(0, N - 1)  # choosing the element of xRef
        j = charge[xRef][yRef]  # Chooses specific axis of a charge and stores it as 'j'
        prevops = None  # assigns prevops as having no variable
        while True:  # code to randomly change charge positions and ensure they do not leave the disc
            ops = (add, sub); op = random.choice(ops)
            tempJ = op(j, d)
            #print xRef, yRef, n, tempJ
            charge[xRef][yRef] = tempJ
            ret = r(charge[0][yRef], charge[1][yRef])
            if ret <= 1.0:
                j = tempJ
                #print "working", n
                break
            elif prevops != ops and prevops != None:  # != is 'not equal to' so that if both addition and subtraction operations don't work the code breaks
                break
            prevops = ops  #####
        o = 0; k = 0; sW = 0  # New energy with altered x coordinate
        for o in range(0, N):
            for k in range(0, N):
                if o == k:
                    continue
                xdist = x[o] - x[k]
                ydist = y[o] - y[k]
                sW += 0.5 / (r(xdist, ydist))
        difference = sW - prevsW
        prevsW = sW
        # Conditions:
        p = 0
        if difference < 0:  # accept change
            charge[xRef][yRef] = j
            #print 'step 5'
        randomnum = random.uniform(0, 1)  # r
        if difference > 0:  # acceptance with a probability
            p = exp(-difference / T)
            #print 'step 6', p
            if randomnum >= p:
                charge[xRef][yRef] = op(tempJ, -d)  # revert coordinate to original if r > p
                #print charge[xRef][yRef], 'r>p'
        #print m, charge, difference

o = 0; k = 0; DW = 0  # sum of energy for final charges
for o in range(0, N):
    for k in range(0, N):
        if o == k:
            continue
        xdist = x[o] - x[k]
        ydist = y[o] - y[k]
        DW += 0.5 / (r(xdist, ydist))
print charge
print 'Final Energy:', DW

################### plotting circle ###################
# use radians instead of degrees
list_radians = [0]
for i in range(0, 360):
    float_div = 180.0 / (i + 1)
    list_radians.append(pi / float_div)

# list of coordinates for each point
list_x2_axis = []
list_y2_axis = []

# calculate coordinates
# and append to above list
for a in list_radians:
    list_x2_axis.append(cos(a))
    list_y2_axis.append(sin(a))

# plot the coordinates
pylab.plot(list_x2_axis, list_y2_axis, c='r')

########################################################
pylab.title('Distribution of Charges on a Disc')
pylab.scatter(x, y)
pylab.show()
What seems to be taking the time is this:
for q in range(0,100):
    ...
    for m in range(0, 4000): #steps 1 - 6 in notes looped over
        while True: #code to randomly change charge positions and ensure they do not leave the disc
            ....
        for o in range(0, N): # <----- N will be brought up to 400
            for k in range(0, N):
                ....
            ....
        ....
    ....
100 x 4000 x (while loop) + 100 x 4000 x 400 x 400 = [400,000 x while loop] + [64,000,000,000]
Before looking into a faster language, maybe there is a better way to build your simulation?
Other than that, you will likely have immediate performance gains if you:
- shift to numpy arrays instead of Python lists.
- use xrange instead of range.
[edit to try to answer question in the comments]:
import numpy as np, random

N = 11  # Number of point charges
x = np.random.uniform(0, 1, N)
y = np.random.uniform(0, 1, N)
z = np.zeros(N)
z = np.sqrt(x**2 + y**2)  # <--- this could maybe replace r(x,y) (called quite often in your code)
print x, y, z
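Along the same lines, the O(N^2) double loops that recompute the total energy (W, sW and DW) could be replaced with a broadcasted pairwise-distance calculation; a minimal sketch, assuming the numpy arrays x and y above:
def total_energy(x, y):
    # Pairwise differences via broadcasting; the energy sums 0.5 / distance over all o != k.
    dx_pair = x[:, None] - x[None, :]
    dy_pair = y[:, None] - y[None, :]
    dist = np.sqrt(dx_pair**2 + dy_pair**2)
    np.fill_diagonal(dist, np.inf)        # exclude the o == k terms
    return np.sum(0.5 / dist)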
You also could look into all the variables that are assigned or recalculated many, many times inside your main loop (the one described above), and pull all of that outside the loop(s) so it is not repeatedly assigned or recalculated.
For instance,
ops = (add, sub); op = random.choice(ops)
could be split so that ops = (add, sub) is defined once, outside the loops, and only
op = random.choice(ops)
remains inside the loop.
Lastly, and here I am out on a limb because I've never used it myself, but it might be a little bit simpler for you to use a package like Numba, as opposed to Cython; it allows you to decorate a critical part of your code with a jit decorator and have it compiled prior to execution, with no or only very minor changes.
