How to change a function without changing its parameters - Python

I am new to Python and still learning. I wanted to implement the Particle Swarm Optimization (PSO) algorithm, which I did with the help of online materials and Python tutorials. In PSO, a simple benchmark problem is minimised, i.e. 100 * ((y - x**2)**2) + ((1 - x**2)**2). This problem is defined in a fitness function.
def fitness(x, y):
    return 100 * ((y - (x**2))**2) + ((1 - (x**2))**2)
Now, I want to replace this problem with a simple first-order Ordinary Differential Equation (ODE) without changing the existing function parameters (x, y), and I want to return the values of dy_dx, y0 and t for further processing.
# Define a function which calculates the derivative
def dy_dx(y, x):
    return x - y

t = np.linspace(0, 5, 100)
y0 = 1.0  # the initial condition
ys = odeint(dy_dx, y0, t)
In Python the odeint function is used for ODEs, and it requires three essential parameters, i.e. func/model, y0 (the initial condition on y, which can be a vector) and t (a sequence of time points at which to solve for y); see the scipy documentation for an example of odeint's parameters.
I don't want to change its parameters because it would be difficult for me to make changes to the algorithm.
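A minimal sketch of keeping the fitness(x, y) interface while solving the ODE inside it might look like the following. How the ODE solution should be collapsed into a single fitness value is left open here, so the mapping used below (x as the initial condition, the squared distance of the final value from y as the score) is only an assumption:

import numpy as np
from scipy.integrate import odeint

def dy_dx(y, x):
    return x - y

def fitness(x, y):
    # Same (x, y) signature the PSO loop already expects.
    t = np.linspace(0, 5, 100)      # fixed time grid, as in the question
    y0 = x                          # assumption: the particle's x is used as the initial condition
    ys = odeint(dy_dx, y0, t)
    # The PSO comparisons need one scalar, so some summary of the solution
    # has to be returned; this particular choice is only illustrative.
    return float((ys[-1, 0] - y) ** 2)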
For simplicity I have pasted the full code below, and my question is open to anyone who wants to modify the code with further parameters in General Best, Personal Best and r[i].
import numpy as np
from scipy.integrate import odeint
import random as rand
from numpy import array
import matplotlib.pyplot as plt
def main():
    #Variables
    n = 40
    num_variables = 2

    a = np.empty((num_variables, n))
    v = np.empty((num_variables, n))
    Pbest = np.empty((num_variables, n))
    Gbest = np.empty((1, 2))
    r = np.empty((n))

    for i in range(0, num_variables):
        for j in range(0, n):
            Pbest[i][j] = rand.randint(-20, 20)
            a[i][j] = Pbest[i][j]
            v[i][j] = 0

    for i in range(0, n):
        r[i] = fitness(a[0][i], a[1][i])

    #Sort elements of Pbest
    Order(Pbest, r, n)

    Gbest[0][0] = Pbest[0][0]
    Gbest[0][1] = Pbest[1][0]

    generation = 0

    plt.ion()
    fig = plt.figure()
    ax = fig.add_subplot(111)
    ax.grid(True)

    while(generation < 1000):
        for i in range(n):
            #Get Personal Best
            if(fitness(a[0][i], a[1][i]) < fitness(Pbest[0][i], Pbest[1][i])):
                Pbest[0][i] = a[0][i]
                Pbest[1][i] = a[1][i]

            #Get General Best
            if(fitness(Pbest[0][i], Pbest[1][i]) < fitness(Gbest[0][0], Gbest[0][1])):
                Gbest[0][0] = Pbest[0][i]
                Gbest[0][1] = Pbest[1][i]

        #Calculate Velocity
        Vector_Velocidad(n, a, Pbest, Gbest, v)

        generation = generation + 1
        print('Generacion: ' + str(generation) + ' - - - Gbest: ' + str(Gbest))

        line1 = ax.plot(a[0], a[1], 'r+')
        line2 = ax.plot(Gbest[0][0], Gbest[0][1], 'g*')
        ax.set_xlim(-10, 10)
        ax.set_ylim(-10, 10)
        fig.canvas.draw()
        ax.clear()
        ax.grid(True)

    print('Gbest: ')
    print(Gbest)
def Vector_Velocidad(n, a, Pbest, Gbest, v):
    for i in range(n):
        #Velocity in X
        v[0][i] = 0.7 * v[0][i] + (Pbest[0][i] - a[0][i]) * rand.random() * 1.47 + (Gbest[0][0] - a[0][i]) * rand.random() * 1.47
        a[0][i] = a[0][i] + v[0][i]

        v[1][i] = 0.7 * v[1][i] + (Pbest[1][i] - a[1][i]) * rand.random() * 1.47 + (Gbest[0][1] - a[1][i]) * rand.random() * 1.47
        a[1][i] = a[1][i] + v[1][i]

def fitness(x, y):
    return 100 * ((y - (x**2))**2) + ((1 - (x**2))**2)

def Order(Pbest, r, n):
    for i in range(1, n):
        for j in range(0, n - 1):
            if r[j] > r[j + 1]:
                #Order the fitness
                tempRes = r[j]
                r[j] = r[j + 1]
                r[j + 1] = tempRes

                #Order the X, Y
                tempX = Pbest[0][j]
                Pbest[0][j] = Pbest[0][j + 1]
                Pbest[0][j + 1] = tempX

                tempY = Pbest[1][j]
                Pbest[1][j] = Pbest[1][j + 1]
                Pbest[1][j + 1] = tempY

if __name__ == '__main__':
    main()


Gradient descent works in MATLAB but not in Python

Matlab version
For the contour plotting
[x1,x2] = meshgrid(-30:0.5:30, -30:0.5:30);
F = (x1-2).^2 + 2*(x2 - 3).^2;
figure;
surf(x1,x2,F);
hold on;
contour(x1,x2,F);
figure;
contour(x1,x2,F,20);
hold on;
To initialize the values of the matrix and vectors
A = [1 0; 0 2];
AT = A';
b = [4; 12];
Nit = 100; % no of iteration of our GD
tol = 1e-5; % error tolerance
lr = 0.2; % learning rate
xk = [-20;-20]; % initial x value
noIterations = 1;
gradErr = [];
The loop for the gradient descent
for k = 1:Nit
    x_old = xk;
    xk = xk - lr*AT*(A*xk - b); % Main GD step
    gradErr(k) = norm(AT*(A*xk-b),'fro');
    if gradErr(k) < tol
        break;
    end
    plot([x_old(1) xk(1)],[x_old(2) xk(2)],'ko-')
    noIterations = noIterations + 1;
end
Python version
Contour plotting part
import numpy as np
import matplotlib.pyplot as plt
x1,x2 = np.meshgrid(np.arange(- 30,30+0.5,0.5),np.arange(- 30,30+0.5,0.5))
F = (x1 - 2) ** 2 + 2 * (x2 - 3) ** 2
fig=plt.figure()
surf=fig.gca(projection='3d')
surf.plot_surface(x1,x2,F)
surf.contour(x1,x2,F)
plt.show()
fig,surf=plt.subplots()
plt.contour(x1,x2,F,20)
plt.show()
Initialize the value of the matrix and vector
A = np.array([[1,0],[0,2]])
AT = np.transpose(A)
b = np.array([[4],[12]])
Nit = 100
tol = 1e-05
lr = 0.2
xk = np.array([[-10],[-10]])
noIterations = 1
gradErr = []
The main problem is here: the loop below has a bug that stops the code from running.
for k in range(Nit):
    x_old = xk
    xk = xk - lr*np.matmul(AT,np.matmul(A,xk - b))
    gradErr[k] = np.linalg.norm(AT * (A * xk - b),'fro')
    if gradErr[k] < tol:
        break
    plt.plot(np.array([x_old(1),xk(1)]),np.array([x_old(2),xk(2)]),'ko-')
    noIterations = noIterations + 1
May I know what the problem is with the loop in my Python version, when the MATLAB version works well?
To access the k-th element of gradErr, it has to be pre-allocated with a positive length. In your case it is initialized as an empty list, which is the cause of the IndexError. A simple fix is to use gradErr = np.zeros(Nit). The full code after making the proper modification is the following:
import numpy as np
import matplotlib.pyplot as plt
x1,x2 = np.meshgrid(np.arange(-30, 30+0.5, 0.5), np.arange(-30, 30+0.5, 0.5))
F = (x1 - 2) ** 2 + 2 * (x2 - 3) ** 2
fig=plt.figure()
surf = fig.add_subplot(1, 1, 1, projection='3d')
surf.plot_surface(x1,x2,F)
surf.contour(x1,x2,F)
plt.show()
fig, surf=plt.subplots()
plt.contour(x1, x2, F, 20)
plt.show()
A = np.array([[1,0], [0,2]])
AT = np.transpose(A)
b = np.array([[4], [12]])
Nit = 100
tol = 1e-05
lr = 0.2
xk = np.array([[-10], [-10]])
noIterations = 1
gradErr = np.zeros(Nit)
for k in range(Nit):
    x_old = xk
    xk = xk - lr * np.matmul(AT, np.matmul(A, xk) - b)   # mirrors the MATLAB step xk - lr*AT*(A*xk - b)
    gradErr[k] = np.linalg.norm(np.matmul(AT, np.matmul(A, xk) - b), 'fro')
    if gradErr[k] < tol:
        break
    plt.plot(np.array([x_old[0], xk[0]]), np.array([x_old[1], xk[1]]), 'ko-')
    noIterations = noIterations + 1
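If pre-allocation is not wanted, a sketch of an equivalent append-based loop (keeping gradErr as a plain list and reading back its last element) would be:

gradErr = []
for k in range(Nit):
    x_old = xk
    xk = xk - lr * np.matmul(AT, np.matmul(A, xk) - b)
    gradErr.append(np.linalg.norm(np.matmul(AT, np.matmul(A, xk) - b)))
    if gradErr[-1] < tol:
        break
    plt.plot(np.array([x_old[0], xk[0]]), np.array([x_old[1], xk[1]]), 'ko-')
    noIterations = noIterations + 1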

Compute Fourier Series for a discrete set of points

I'm trying to compute the continuous function hidden behind the points, but my plot looks as if the series actually counts the points in between as zeros.
Here's the plot that shows up (100 vectors, red dots - data set, blue plot - my Fourier series):
Here's the python code:
import matplotlib.pyplot as plt
import numpy as np
import math
step = (np.pi * 2) / 5
start = -np.pi
xDiscrete = [start, start + step, start + 2 * step, start + 3 * step, start + 4 * step, np.pi]
yDiscrete = [2.88, 2.98, 3.24, 3.42, 3.57, 3.79]
ak = []
bk = []
a0 = 0
precisionSize = 0.001
n = 100
avgError = 0
def getAN(k):
    sum = 0
    for ind in range(1, len(yDiscrete)):
        sum += yDiscrete[ind] * math.cos(k * xDiscrete[ind])
    an = (2.0 / n) * sum
    print('a' + str(k) + ' = ' + str(an))
    return an

def getBN(k):
    sum = 0
    for ind in range(1, len(yDiscrete)):
        sum += yDiscrete[ind] * math.sin(k * xDiscrete[ind])
    bn = (2.0 / n) * sum
    print('b' + str(k) + ' = ' + str(bn))
    return bn

def getA0():
    sum = 0
    for ind in range(1, len(yDiscrete)):
        sum += yDiscrete[ind]
    a0 = (2.0 / n) * sum
    print('a0 = ' + str(a0))
    return a0

def getFourierOneSum(x, i):
    return ak[i - 1] * math.cos(i * x) + bk[i - 1] * math.sin(i * x)

def getFourierAtPoint(x):
    sum = a0 / 2
    for i in range(1, n + 1):
        sum += getFourierOneSum(x, i)
    return sum

for i in range(1, n + 1):
    ak.append(getAN(i))
    bk.append(getBN(i))
a0 = getA0()

x2 = np.arange(-np.pi, np.pi, precisionSize)
y2 = []
for coor in x2:
    y2.append(getFourierAtPoint(coor))
plt.plot(xDiscrete, yDiscrete, 'ro', alpha=0.6)
plt.plot(x2, y2)
plt.grid()
plt.title('Approximation')
plt.show()
I've checked where the problem is, and I'm pretty sure it's in the coefficients (the functions getAN, getBN, getA0), but I'm not sure how to fix it.
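For reference, the (2.0 / n) factor in those functions divides by the number of harmonics (n = 100) rather than by the number of samples, which shrinks every coefficient and makes the series collapse towards zero between the data points. A hedged sketch of coefficient functions normalised by the number of distinct samples per period (the first and last x values land on the same point of the period, so five samples are summed) would be:

N_pts = len(yDiscrete) - 1   # distinct samples per period

def getAN(k):
    total = 0.0
    for ind in range(1, len(yDiscrete)):
        total += yDiscrete[ind] * math.cos(k * xDiscrete[ind])
    return (2.0 / N_pts) * total

def getBN(k):
    total = 0.0
    for ind in range(1, len(yDiscrete)):
        total += yDiscrete[ind] * math.sin(k * xDiscrete[ind])
    return (2.0 / N_pts) * total

def getA0():
    return (2.0 / N_pts) * sum(yDiscrete[1:])

With only five distinct samples, only harmonics up to k = 2 can be resolved, so n would also need to be reduced from 100 to about 2 to get a smooth curve through the points.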

How to fix for loop for a list

I am trying to build y as a list by iterating over zeta, which depends on E, all inside a for loop. However, the values are not being added to the list.
I have also tried defining the variables and the mathematical function as two separate functions.
import cmath
import matplotlib.pyplot as plt
a = 2*10**-15
Vo = 83*10**6
m = 1.6726*10**(-27)
pi = cmath.pi
E = []
E.append(-83*10**6)
hbar = 6.62607015*10**(-34)/ pi
K = 16.032280*10**6
y = []
y.append(51311.18131)
def variables(y, E):
    for i in range(1, 83, 1):
        alpha = cmath.sqrt(2*m*(E[i-1]+Vo)/(hbar**2))
        zeta = alpha*a
        eta = cmath.sqrt(k - zeta**2)
        y[i] = zeta*cmath.tan(zeta) - eta
        E[i] = E[i-1] + 1
    return y, E

print('E = ', E, 'Y = ', y)
plt.plot(E, y)
As it stands, the program should graph the y values as a function of zeta, which changes with the energy.
You don't need the loop to be in a function, just put it at the top-level of the script. And use y.append() and E.append() to add to those lists.
for i in range(1, 83):
    alpha = cmath.sqrt(2*m*(E[i-1]+Vo)/(hbar**2))
    zeta = alpha*a
    eta = cmath.sqrt(k - zeta**2)
    y.append(zeta*cmath.tan(zeta) - eta)
    E.append(E[i-1] + 1)
In addition to @Barmar's answer, your k variable needs to be K (upper case).
import cmath
import matplotlib.pyplot as plt

a = 2*10**-15
Vo = 83*10**6
m = 1.6726*10**(-27)
pi = cmath.pi
E = [0] * 83
E[0] = -83*10**6        # initial condition goes in the first slot, not appended at the end
hbar = 6.62607015*10**(-34) / pi
K = 16.032280*10**6
y = [0] * 83
y[0] = 51311.18131      # initial condition goes in the first slot, not appended at the end

for i in range(1, 83, 1):
    alpha = cmath.sqrt(2*m*(E[i-1]+Vo)/(hbar**2))
    zeta = alpha*a
    eta = cmath.sqrt(K - zeta**2)
    y[i] = zeta*cmath.tan(zeta) - eta
    E[i] = E[i-1] + 1

print('E = ', E, 'Y = ', y)
plt.plot(E, y)
Also, append is not required here; append does not always combine well with calculated index lookups. It might be better to initialize the y and E lists to the length of your loop first.

Correct implementation of SI, SIS, SIR models (python)

I have created some very basic implementations of the models mentioned above. Although the graphs seem to look right, the numbers don't add up to a constant: the sum of susceptible/infected/recovered people across the compartments should add up to N (the total number of people), but instead it adds up to some odd decimal numbers, and I really don't know how to fix it after looking at it for three days now.
The SI Model:
import matplotlib.pyplot as plt
N = 1000000
S = N - 1
I = 1
beta = 0.6
sus = []   # susceptible compartment
inf = []   # infected compartment
prob = []  # probability of infection at time t

def infection(S, I, N):
    t = 0
    while (t < 100):
        S = S - beta * ((S * I / N))
        I = I + beta * ((S * I) / N)
        p = beta * (I / N)

        sus.append(S)
        inf.append(I)
        prob.append(p)
        t = t + 1
infection(S, I, N)
figure = plt.figure()
figure.canvas.set_window_title('SI model')
figure.add_subplot(211)
inf_line, =plt.plot(inf, label='I(t)')
sus_line, = plt.plot(sus, label='S(t)')
plt.legend(handles=[inf_line, sus_line])
plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0)) # use scientific notation
ax = figure.add_subplot(212)
prob_line = plt.plot(prob, label='p(t)')
plt.legend(handles=prob_line)
type(ax) # matplotlib.axes._subplots.AxesSubplot
# manipulate
vals = ax.get_yticks()
ax.set_yticklabels(['{:3.2f}%'.format(x*100) for x in vals])
plt.xlabel('T')
plt.ylabel('p')
plt.show()
SIS Model:
import matplotlib.pylab as plt
N = 1000000
S = N - 1
I = 1
beta = 0.3
gamma = 0.1
sus = []
inf = []

def infection(S, I, N):
    for t in range(0, 1000):
        S = S - (beta*S*I/N) + gamma * I
        I = I + (beta*S*I/N) - gamma * I

        sus.append(S)
        inf.append(I)
infection(S, I, N)
figure = plt.figure()
figure.canvas.set_window_title('SIS model')
inf_line, =plt.plot(inf, label='I(t)')
sus_line, = plt.plot(sus, label='S(t)')
plt.legend(handles=[inf_line, sus_line])
plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
plt.xlabel('T')
plt.ylabel('N')
plt.show()
SIR Model:
import matplotlib.pylab as plt
N = 1000000
S = N - 1
I = 1
R = 0
beta = 0.5
mu = 0.1
sus = []
inf = []
rec = []
def infection(S, I, R, N):
    for t in range(1, 100):
        S = S - (beta * S * I)/N
        I = I + ((beta * S * I)/N) - R
        R = mu * I

        sus.append(S)
        inf.append(I)
        rec.append(R)
infection(S, I, R, N)
figure = plt.figure()
figure.canvas.set_window_title('SIR model')
inf_line, =plt.plot(inf, label='I(t)')
sus_line, = plt.plot(sus, label='S(t)')
rec_line, = plt.plot(rec, label='R(t)')
plt.legend(handles=[inf_line, sus_line, rec_line])
plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
plt.xlabel('T')
plt.ylabel('N')
plt.show()
I'll look only at the SI model.
Your two key variables are S and I. (You may have reversed the meanings of these two variables, though that does not affect what I write here.) You initialize them so their sum is N which is the constant 1000000.
You update your two key variables in the lines
S = S - beta * ((S * I / N))
I = I + beta * ((S * I) / N)
You apparently intend to add to I and subtract from S the same value, so the sum of S and I is unchanged. However, you actually first change S then use that new value to change I, so the values added and subtracted are not actually the same, and the sum of the variables has not remained constant.
You can fix this by using Python's ability to update multiple variables in one line. Replace those two lines with
S, I = S - beta * ((S * I / N)), I + beta * ((S * I) / N)
This calculates both of the new values before updating the variables, so the same value is actually added to one variable and subtracted from the other. (There are other ways to get the same effect, such as temporary variables for the updated values, or one temporary variable to store the amount to add and subtract, but since you use Python you may as well use its capabilities.)
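For instance, the single-temporary-variable variant mentioned there is just (a minimal sketch of the same update):

delta = beta * S * I / N   # amount moved from S to I in this step, computed once
S = S - delta
I = I + delta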
When I now run the program, I get graphs that I think are what you want.
So the solution above worked for the SIS model as well.
As for the SIR model I had to solve differential equations using odeint, here is a simple solution to the SIR model:
import matplotlib.pylab as plt
from scipy.integrate import odeint
import numpy as np
N = 1000
S = N - 1
I = 1
R = 0
beta = 0.6 # infection rate
gamma = 0.2 # recovery rate
# differential equations
def diff(sir, t):
    # sir[0] - S, sir[1] - I, sir[2] - R
    dsdt = - (beta * sir[0] * sir[1])/N
    didt = (beta * sir[0] * sir[1])/N - gamma * sir[1]
    drdt = gamma * sir[1]
    print(dsdt + didt + drdt)
    dsirdt = [dsdt, didt, drdt]
    return dsirdt
# initial conditions
sir0 = (S, I, R)
# time points
t = np.linspace(0, 100)
# solve ODE
# the parameters are, the equations, initial conditions,
# and time steps (between 0 and 100)
sir = odeint(diff, sir0, t)
plt.plot(t, sir[:, 0], label='S(t)')
plt.plot(t, sir[:, 1], label='I(t)')
plt.plot(t, sir[:, 2], label='R(t)')
plt.legend()
plt.xlabel('T')
plt.ylabel('N')
# use scientific notation
plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
plt.show()
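Since the original complaint was that the compartments did not sum to N, a quick check on the odeint solution above (a small sketch using the sir array it returns) is:

totals = sir.sum(axis=1)        # S(t) + I(t) + R(t) at every time point
print(np.allclose(totals, N))   # should print True up to solver tolerance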

Replacing multiprocessing pool.map with mpi4py

I'm a beginner with MPI, and I'm still going through the documentation. However, there's very little material to work from when it comes to mpi4py. I have written code that currently uses the multiprocessing module to run on many cores, but I need to replace this with mpi4py so that I can use more than one node to run my code. My code is below, with the multiprocessing module and also without it.
With multiprocessing,
import numpy as np
import random
import time
import multiprocessing

start_time = time.time()
E = 0.1
M = 5
n = 1000
G = 1
c = 1
stretch = [10, 1]
#Point-Distribution Generator Function
def CDF_inv(x, e, m):
    A = 1/(1 + np.log(m/e))
    if x == 1:
        return m
    elif 0 <= x <= A:
        return e * x / A
    elif A < x < 1:
        return e * np.exp((x / A) - 1)

#Elliptical point distribution Generator Function
def get_coor_ellip(dist=CDF_inv, params=[E, M], stretch=stretch):
    R = dist(random.random(), *params)
    theta = random.random() * 2 * np.pi
    return (R * np.cos(theta) * stretch[0], R * np.sin(theta) * stretch[1])

def get_dist_sq(x_array, y_array):
    return x_array**2 + y_array**2

#Function to obtain alpha
def get_alpha(args):
    zeta_list_part, M_list_part, X, Y = args
    alpha_x = 0
    alpha_y = 0
    for key in range(len(M_list_part)):
        z_m_z_x = X - zeta_list_part[key][0]
        z_m_z_y = Y - zeta_list_part[key][1]
        dist_z_m_z = get_dist_sq(z_m_z_x, z_m_z_y)
        alpha_x += M_list_part[key] * z_m_z_x / dist_z_m_z
        alpha_y += M_list_part[key] * z_m_z_y / dist_z_m_z
    return (alpha_x, alpha_y)
#The part of the process containing the loop that needs to be parallelised, where I use pool.map()
if __name__ == '__main__':
    # n processes, scale accordingly
    num_processes = 10
    pool = multiprocessing.Pool(processes=num_processes)

    random_sample = [CDF_inv(x, E, M)
                     for x in [random.random() for e in range(n)]]
    zeta_list = [get_coor_ellip() for e in range(n)]
    x1, y1 = zip(*zeta_list)
    zeta_list = np.column_stack((np.array(x1), np.array(y1)))

    x = np.linspace(-3, 3, 100)
    y = np.linspace(-3, 3, 100)
    X, Y = np.meshgrid(x, y)
    print(len(x)*len(y)*n, 'calculations to be carried out.')
    M_list = np.array([.001 for i in range(n)])

    # split zeta_list, M_list, X, and Y
    zeta_list_split = np.array_split(zeta_list, num_processes, axis=0)
    M_list_split = np.array_split(M_list, num_processes)
    X_list = [X for e in range(num_processes)]
    Y_list = [Y for e in range(num_processes)]

    alpha_list = pool.map(
        get_alpha, zip(zeta_list_split, M_list_split, X_list, Y_list))

    alpha_x = 0
    alpha_y = 0
    for e in alpha_list:
        alpha_x += e[0] * 4 * G / (c**2)
        alpha_y += e[1] * 4 * G / (c**2)

    print("%f seconds" % (time.time() - start_time))
Without multiprocessing,
import numpy as np
import random

E = 0.1
M = 5
G = 1
c = 1
n = 1000  # defined here because M_list below uses it
M_list = [.1 for i in range(n)]
#Point-Distribution Generator Function
def CDF_inv(x, e, m):
    A = 1/(1 + np.log(m/e))
    if x == 1:
        return m
    elif 0 <= x <= A:
        return e * x / A
    elif A < x < 1:
        return e * np.exp((x / A) - 1)
n = 1000
random_sample = [CDF_inv(x, E, M)
for x in [random.random() for e in range(n)]]
stretch = [5, 2]
#Elliptical point distribution Generator Function
def get_coor_ellip(dist=CDF_inv, params=[E, M], stretch=stretch):
    R = dist(random.random(), *params)
    theta = random.random() * 2 * np.pi
    return (R * np.cos(theta) * stretch[0], R * np.sin(theta) * stretch[1])
#zeta_list is the list of coordinates of a distribution of points
zeta_list = [get_coor_ellip() for e in range(n)]
x1, y1 = zip(*zeta_list)
zeta_list = np.column_stack((np.array(x1), np.array(y1)))
#Creation of a X-Y Grid
x = np.linspace(-3, 3, 100)
y = np.linspace(-3, 3, 100)
X, Y = np.meshgrid(x, y)
def get_dist_sq(x_array, y_array):
    return x_array**2 + y_array**2
#Calculation of alpha, containing the loop that needs to be parallelised.
alpha_x = 0
alpha_y = 0
for key in range(len(M_list)):
    z_m_z_x = X - zeta_list[key][0]
    z_m_z_y = Y - zeta_list[key][1]
    dist_z_m_z = get_dist_sq(z_m_z_x, z_m_z_y)
    alpha_x += M_list[key] * z_m_z_x / dist_z_m_z
    alpha_y += M_list[key] * z_m_z_y / dist_z_m_z
alpha_x *= 4 * G / (c**2)
alpha_y *= 4 * G / (c**2)
Basically, my code first generates a list of points that follow a certain distribution. Then I apply an equation to obtain the quantity 'alpha' using different relations between the distances of the points. The part that requires parallelisation is the single for loop involved in the calculation of alpha. What I want to do is use mpi4py instead of multiprocessing for this, and I am not sure how to get it going.
Transforming the multiprocessing.map version to MPI can be done using scatter/gather. In your case it helps that you already prepare the input list as one chunk per rank. The main difference is that all of the code gets executed by all ranks, so you must make everything that should be done only by the master rank 0 conditional.
from mpi4py import MPI

if __name__ == '__main__':
    comm = MPI.COMM_WORLD
    if comm.rank == 0:
        random_sample = [CDF_inv(x, E, M)
                         for x in [random.random() for e in range(n)]]
        zeta_list = [get_coor_ellip() for e in range(n)]
        x1, y1 = zip(*zeta_list)
        zeta_list = np.column_stack((np.array(x1), np.array(y1)))

        x = np.linspace(-3, 3, 100)
        y = np.linspace(-3, 3, 100)
        X, Y = np.meshgrid(x, y)
        print(len(x)*len(y)*n, 'calculations to be carried out.')
        M_list = np.array([.001 for i in range(n)])

        # split zeta_list, M_list, X, and Y
        zeta_list_split = np.array_split(zeta_list, comm.size, axis=0)
        M_list_split = np.array_split(M_list, comm.size)
        X_list = [X for e in range(comm.size)]
        Y_list = [Y for e in range(comm.size)]
        work_list = list(zip(zeta_list_split, M_list_split, X_list, Y_list))
    else:
        work_list = None

    my_work = comm.scatter(work_list)
    my_alpha = get_alpha(my_work)
    alpha_list = comm.gather(my_alpha)

    if comm.rank == 0:
        alpha_x = 0
        alpha_y = 0
        for e in alpha_list:
            alpha_x += e[0] * 4 * G / (c**2)
            alpha_y += e[1] * 4 * G / (c**2)
This works fine as long as each processor gets a similar amount of work. If communication becomes an issue, you might want to split up the data generation among processors instead of doing it all on the master rank 0.
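A sketch of that variant, where every rank draws its own share of the points locally so only the small per-rank results need to be gathered (the per-rank point count and the per-rank random seed are assumptions here):

comm = MPI.COMM_WORLD
random.seed(comm.rank)                     # assumption: give each rank its own random stream

n_local = n // comm.size                   # points generated on this rank
zeta_local = np.array([get_coor_ellip() for _ in range(n_local)])
M_local = np.array([.001 for _ in range(n_local)])

x = np.linspace(-3, 3, 100)
y = np.linspace(-3, 3, 100)
X, Y = np.meshgrid(x, y)                   # every rank builds the same grid

my_alpha = get_alpha((zeta_local, M_local, X, Y))
alpha_list = comm.gather(my_alpha)         # rank 0 receives one (alpha_x, alpha_y) per rank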
Note: Some things about the code are bogus, e.g. alpha_[xy] ends up as np.ndarray. The serial version runs into an error.
For people who are still interested in similar subjects, I highly recommend having a look at the MPIPoolExecutor() class in mpi4py.futures and its documentation.
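With that class the existing pool.map call carries over almost unchanged (a sketch; it would be launched with something like mpiexec -n 10 python -m mpi4py.futures script.py):

from mpi4py.futures import MPIPoolExecutor

if __name__ == '__main__':
    # ... build zeta_list_split, M_list_split, X_list and Y_list as before ...
    with MPIPoolExecutor() as executor:
        alpha_list = list(executor.map(
            get_alpha, zip(zeta_list_split, M_list_split, X_list, Y_list)))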
