I am trying to solve the following equation in Python using the scipy.odeint function. Currently I am able to implement this form of the equation in Python using the following script:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

def dY(y1, x):
    a = 0.001
    yin = 1
    C = 0.01
    N = 1
    dC = C/N
    b1 = 0
    return (a/dC)*(yin - y1) + b1*dC

x = np.linspace(0, 20, 1000)
y0 = 0
res = odeint(dY, y0, x)

plt.plot(x, res, '-')
plt.show()
My problem with the first equation is the index i. I don't know how to integrate the equation and still be able to provide the current and previous y values (y_(i-1) and y_i). i is simply a sequence number within the range 0..100.
Edit 1:
The original equation, rewritten using y, x, a, b and C, is

dy_i/dx = (a/dC)*(y_(i-1) - y_i) + b1*dC

where the first cell (i = 1) uses the inflow value yin in place of y_0.
Edit 2:
I edited Pierre de Buyl's code and changed the N value. Luckily I have a validation table to check the outcome against. Unfortunately, the results are not equal.
Here is my validation table:
and here is the numpy output:
Used code:
def dY(y, x):
    a = 0.001
    yin = 1
    C = 0.01
    N = 3
    dC = C/N
    b1 = 0.01
    y_diff = -np.copy(y)
    y_diff[0] += yin
    y_diff[1:] += y[:-1]
    return (a/dC)*y_diff + b1*dC

x = np.linspace(0, 20, 11)
y0 = np.zeros(3)
res = odeint(dY, y0, x)

plt.plot(x, res, '-')
As you can see, the values differ by an offset of about 0.02. Am I missing something that results in this offset?
The equation is a "coupled" ordinary differential equation (see "System of ODEs" on Wikipedia). The variable y is a vector containing y[0], y[1], etc. To solve the ODE you must feed a vector as the initial condition, and the function dY must return a vector as well.
I have modified your code to achieve this result:
def dY(y, x):
    a = 0.001
    yin = 1
    C = 0.01
    N = 1
    dC = C/N
    b1 = 0
    y_diff = -np.copy(y)
    y_diff[0] += yin
    y_diff[1:] += y[:-1]
    return (a/dC)*y_diff + b1*dC
I have written the part y[i-1] - y[i] as a NumPy vector operation and special-cased the coordinate y[0] (that is the y1 in your notation, but arrays start at 0 in Python).
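A quick check of the shift trick on a small hypothetical input (the values are arbitrary, chosen only for illustration): for y = [y0, y1, y2] the result should come out as [yin - y0, y0 - y1, y1 - y2].

import numpy as np

y = np.array([2.0, 3.0, 5.0])
yin = 1.0
y_diff = -np.copy(y)    # start from -y[i]
y_diff[0] += yin        # the first cell sees the inflow value
y_diff[1:] += y[:-1]    # every other cell sees the previous cell
print(y_diff)           # [-1. -1. -2.], i.e. [yin-y0, y0-y1, y1-y2]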
The solution, using an initial value of 0 for all yi, is
x = np.linspace(0,20,1000)
y0 = np.zeros(4)
res = odeint(dY, y0, x)
plt.plot(x,res, '-')
Related
I have a constrained optimization problem where I am trying to minimize an objective function of 100+ variables, which is of the form
Min F(x) = f(x1) + f(x2) + ... + f(xn)
Subject to functional constraint
(g(x1) + g(x2) + ... + g(xn))/(f(x1) + f(x2) + ... + f(xn)) - constant >= 0
I also have individual bounds for each variable x1, x2, x3...xn
a <= x1 <= b
c <= x2 <= d
...
For this, I wrote a Python script using the scipy.optimize.minimize implementation with constraints and bounds, but the solutions do not satisfy my bounds and constraints. This happens even in cases where the optimization converges to a solution (message: success).
Here is a sample of my code:
df is my pandas dataset
B(x) is LogNorm transform based on x and other constants
Values U, c, lb, ub are pre-calculated constant dictionaries for each index in df
import pandas as pd
import scipy.optimize

df = pd.DataFrame(..)
k = set(df.index.values)  ## list of indexes to iterate on
val = 0.25                ## Arbitrary

def obj(x):
    fn = 0
    for n, i in enumerate(k):
        x0 = x[n]
        fn1 = U[i] * B(x0) * x0
        fn += fn1
    return fn

def cons(x):
    c1 = 0
    c2 = 0
    for n, i in enumerate(k):
        x0 = x[n]
        c1 += U[i] * B(x0) * (x0 - c[i])
        c2 += U[i] * B(x0) * x0
    cn = c1/c2
    return cn - val

const = [{'type': 'ineq', 'fun': cons}]
bnds = tuple((lb[i], ub[i]) for i in k)  ## Lower, Upper for each element ((lb1, ub1), (lb2, ub2)...)
x_init = [lb[i] for i in k]              ## for eg. starting from the lower bound

## Solution
sol = scipy.optimize.minimize(obj, x_init, method='COBYLA', bounds=bnds, constraints=const)
I have more pointed questions if that helps:
Is there a way to construct the same equations concisely, without the use of loops (given that the number of variables depends on the input data and I have no control over it)?
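For illustration, a minimal vectorized sketch of the objective and constraint above, assuming (on my part) that U and c can be collected into NumPy arrays ordered like k and that B accepts array input:

import numpy as np

idx = list(k)
U_arr = np.array([U[i] for i in idx])
c_arr = np.array([c[i] for i in idx])

def obj(x):
    x = np.asarray(x)
    return np.sum(U_arr * B(x) * x)           # elementwise products, then one sum

def cons(x):
    x = np.asarray(x)
    c1 = np.sum(U_arr * B(x) * (x - c_arr))   # numerator
    c2 = np.sum(U_arr * B(x) * x)             # denominator
    return c1/c2 - val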
Is there any noticeable issue in my application of bounds? I can't seem to get the final values of all variables to respect their individual bounds.
Similarly, is there a visible flaw in the construction of the constraint equation? My results often do NOT satisfy the constraint in repeated runs with different inputs.
Any help with either of the questions can help me progress further at work.
I have also looked into a Lagrangian solution of the same problem, but so far I am unable to solve it for an undefined number of variables n.
Thanks!
I am trying to evaluate an integral using the Monte Carlo method, where the integrand underwent a transformation from cylindrical to Cartesian coordinates. The integrand itself is quite simple and could be calculated using scipy.integrate.quad, but I need it in Cartesian coordinates for a specific purpose later on.
So here is the integrand: rho*k_r**2*(k_r**2*kn(0,k_r*rho)**2 + k_n**2*kn(1,k_r*rho)**2) drho
Here kn(n, x) is the modified Bessel function of the second kind.
Solving it with quad gives the following result:
import numpy as np
from scipy.special import kn
from scipy.integrate import quad
import random

k_r = 6.2e-2
k_n = k_r/1.05
C_factor = 2*np.pi*1e5
lmax, lmin = 80, 50

def integration_polar():
    def K_int(rho):
        return rho*k_r**2*(k_r**2*kn(0, k_r*rho)**2 + k_n**2*kn(1, k_r*rho)**2)
    I, _ = quad(K_int, lmin, lmax)
    Gamma = I*C_factor
    print("expected=", Gamma)

integration_polar()
Output: Expected = 7.641648442007296
Now the same integral using the Monte Carlo method (the hit-or-miss method, looked up from here) gives almost the same result:
def integration_polar_MC():
    random.seed(1)
    n = 100000
    def K_int(rho):
        return rho*k_r**2*(k_r**2*kn(0, k_r*rho)**2 + k_n**2*kn(1, k_r*rho)**2)
    c_lim = 2*K_int(50)  # Upper bound of the integrand on [lmin, lmax]
    def sampler():
        x = random.uniform(lmin, lmax)
        y = random.uniform(0, c_lim)
        return x, y
    sum_I = 0
    for i in range(n):
        x, y = sampler()
        func_Int = K_int(x)
        if y > func_Int:
            I = 0
        else:
            I = 1
        sum_I += I
    Gamma = C_factor*(lmax - lmin)*c_lim*sum_I/n
    print("MC_integral_polar:", Gamma)

integration_polar_MC()
Output: MC_integral_polar = 7.637391399699502
Since Monte Carlo worked in this example, I thought the Cartesian case would go smoothly as well, but I couldn't get the right answer.
For the Cartesian case I similarly employed the hit-or-miss method, with rho = np.sqrt(x**2+y**2) and the integrand becoming k_r**2*(k_r**2*kn(0,k_r*rho)**2 + k_n**2*kn(1,k_r*rho)**2) dx dy, over the domain in x and y:
-80 <= x <= 80
-80 <= y <= 80
50 <= np.sqrt(x**2+y**2) <= 80
Here is my attempt:
def integration_cartesian_MCtry():
    random.seed(1)
    lmin, lmax = -100, 100
    n = 100000
    def K_int(x, y):
        rho = np.sqrt(x**2 + y**2)
        if 50 <= rho <= 80:
            return k_r**2*(k_r**2*kn(0, k_r*rho)**2 + k_n**2*kn(1, k_r*rho)**2)
        else:
            return 0
    c_lim = K_int(50, 0)  # Upper bound of the integrand
    def sampler():
        x = random.uniform(lmin, lmax)
        y = random.uniform(lmin, lmax)
        z = random.uniform(0, c_lim)
        return x, y, z
    sum_I = 0
    for i in range(n):
        x, y, z = sampler()
        func_Int = K_int(x, y)
        if z > func_Int:
            I = 0
        else:
            I = 1
        sum_I += I
    Gamma = C_factor*(lmax - lmin)**2*c_lim*sum_I/n
    print("MC_integral_cartesian:", Gamma)

integration_cartesian_MCtry()
Output: MC_integral_cartesian = 48.83166430996952
As you can see, the Cartesian Monte Carlo overestimates the integral. I am not sure why this is happening, but I think it may be related to incorrect limits or the domain over which I integrate the function.
Any help is appreciated, as I have been stuck without progress for a few days.
The problem, as I said, is with the Jacobian. In the polar case you integrate over

f(ρ)*ρ*dρ*dφ

You integrate over dφ analytically (your f(ρ) doesn't depend on φ) and get a factor of 2π. In the Cartesian case there is no analytical integration, so it is over dx*dy, with no factor of 2π. Here is code to illustrate it (Python 3.9.1, Windows 10 x64); it produced pretty much the same answer:
import numpy as np
from scipy.special import kn
k_r = 6.2e-2
k_n = k_r/1.05
C_factor = 2*np.pi*1e5
lmin = 50
lmax = 80
def integration_polar_MC(rng, n):
    def K_int(rho):
        if 50 <= rho <= 80:
            return rho*k_r**2*(k_r**2*kn(0, k_r*rho)**2 + k_n**2*kn(1, k_r*rho)**2)
        return 0.0
    c_lim = 2*K_int(50)  # Upper bound of the integrand
    def sampler():
        x = rng.uniform(lmin, lmax)
        y = rng.uniform(0.0, c_lim)
        return x, y
    sum_I = 0
    for i in range(n):
        x, y = sampler()
        func_Int = K_int(x)
        I = 1
        if y > func_Int:
            I = 0
        sum_I += I
    Gamma = C_factor*(lmax - lmin)*c_lim*sum_I/n
    return Gamma
def integration_cartesian_MC(rng, n):
    lmin, lmax = -100, 100
    def K_int(x, y):
        rho = np.hypot(x, y)
        if 50 <= rho <= 80:
            return k_r**2*(k_r**2*kn(0, k_r*rho)**2 + k_n**2*kn(1, k_r*rho)**2)
        return 0.0
    c_lim = K_int(50, 0)
    def sampler():
        x = rng.uniform(lmin, lmax)
        y = rng.uniform(lmin, lmax)
        z = rng.uniform(0, c_lim)
        return x, y, z
    sum_I = 0
    for i in range(n):
        x, y, z = sampler()
        func_Int = K_int(x, y)
        I = 1
        if z > func_Int:
            I = 0
        sum_I += I
    Gamma = C_factor*(lmax - lmin)**2*c_lim*sum_I/n
    return Gamma/(2.0*np.pi)  # to compensate for the 2π in the constant
rng = np.random.default_rng()
q = integration_polar_MC(rng, 100000)
print("MC_integral_polar:", q)
q = integration_cartesian_MC(rng, 100000)
print("MC_integral_cart:", q)
I am having trouble getting my scipy optimization to work, and I think it is because of my constraint. The goal of my code is to find the best x and y coordinates at which to place a label for a scatterplot point, such that the label is not close to any old label. The objective is the Euclidean distance between the scatterplot point and the label point (so that the label is as close as possible to the point). The constraint checks all the used x and y values and makes sure that the new label coordinates (nx, ny) are not close to any of the old ones: if either the x or the y is within 1, the constraint returns 1 instead of 0. Since the constraint type is "eq", I think this should be the correct formulation.
However, I am guessing the problem is that the constraint is not a smooth function, so the optimization has no idea how to handle it.
Is there any alternative way I can solve this problem?
from scipy.optimize import minimize
from scipy.spatial import distance

#SECTION 1: Scatterplot: Labeling Points
#optimize (minimize) Euclidean distance
usedx = []
usedy = []

#bounds
b = (0, 10)
bnds = (b, b)

#objective
def objective(x, y):
    nx, ny = x[0], x[1]
    ox, oy = y[0], y[1]
    p1 = (ox, oy)
    p2 = (nx, ny)
    return distance.euclidean(p1, p2)

#constraint
def constraint(p, x, y):
    test = 0
    nx, ny = p[0], p[1]
    for i in x:
        if abs(i - nx) < 1:
            test = 1
    for i in y:
        if abs(i - ny) < 1:
            test = 1
    return test

const = {'type': 'eq', 'fun': constraint, 'args': (usedx, usedy)}

for i, txt in enumerate(selectedFirmCurrent.sht_fund_name):
    originaly = selectedFirmCurrent.iloc[i]['category_score']
    originalx = selectedFirmCurrent.iloc[i]['fund_score']
    originalpoint = [originalx, originaly]
    #initial guess
    p0 = (originalx, originaly - .25)
    #optimization (args must be a tuple, hence the trailing comma)
    solution = minimize(objective, args=(originalpoint,), x0=p0, bounds=bnds, constraints=const)
    #variable assignment
    newx = solution.x[0]
    newy = solution.x[1]
    usedx.append(newx)
    usedy.append(newy)
    print(originalx, ": ", newx)
    print(originaly, ": ", newy + 0.25)
    #implement
    ax.annotate(txt, (originalx, originaly), (newx, newy), arrowprops=dict(arrowstyle='->'))
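For illustration, here is a sketch of one alternative (my own suggestion, not from the original post): express the same test as an 'ineq' constraint that returns a signed margin instead of a 0/1 flag, so the optimizer can tell how far it is from feasibility.

import numpy as np

def smooth_constraint(p, usedx, usedy):
    # >= 0 iff every used x is at least 1 away from nx and
    # every used y is at least 1 away from ny (same test as above)
    if not usedx:
        return 1.0  # no labels placed yet: trivially feasible
    dx = np.abs(np.asarray(usedx) - p[0])
    dy = np.abs(np.asarray(usedy) - p[1])
    return min(dx.min(), dy.min()) - 1.0

const = {'type': 'ineq', 'fun': smooth_constraint, 'args': (usedx, usedy)}

This is still only piecewise differentiable (because of min and abs), but unlike a 0/1 step it gives the solver a usable direction toward the feasible region.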
I'm working on finding and plotting solutions to the Lane-Emden equation for values n = 0 to 6, in intervals of 1/2. I'm new to Python, and can't seem to figure out how to use RK4 to make this work. Please help!
Current progress.
TypeError: unsupported operand type(s) for Pow: 'int' and 'list' on line 37 in main.py
The error appeared after I added the equations defined as r2, r3, r4 and k2, k3, k4.
import numpy as np
import matplotlib.pyplot as plt

n = [0,1,2,3,4,5,6,7,8,9,10,11,12,13]
theta0 = 1
phi0 = 0
step = 0.01
xi0 = 0
xi_max = 100
theta = theta0
phi = phi0
xi = xi0 + step

Theta = [[],[],[],[],[],[],[],[],[],[],[],[],[],[]]
Phi = [[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]]
Xi = [[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]]

for i in n:
    Theta[i].append(theta)
    Phi[i].append(phi)
    Xi[i].append(xi)

def dTheta_dXi(phi,xi): #r1
    return -phi/xi**2

def r2(phi,xi):
    return dTheta_dXi(phi+step,xi+step*dTheta_dXi(phi,xi))

def r3(phi,xi):
    return dTheta_dXi(phi+step,xi+step*r2(phi,xi))

def r4(phi,xi):
    return dTheta_dXi(phi+step,xi+step*r3(phi,xi))

def dPhi_dXi(theta,xi,n): #k1
    return theta**(n)*xi**2

def k2(theta,xi,n):
    return dPhi_dXi(theta+step,xi+step*dPhi_dXi(theta,xi,n),n)

def k3(theta,xi,n):
    return dPhi_dXi(theta+step,xi+step*k2(theta,xi,n),n)

def k4(theta,xi,n):
    return dPhi_dXi(theta+step,xi+step*k3(theta,xi,n),n)

for i in n:
    while xi < xi_max:
        if theta < 0:
            break
        dTheta = (step/6)*(dTheta_dXi(phi,xi)+2*r2(phi,xi)+2*r3(phi,xi)+r4(phi,xi))
        dPhi = (step/6)*(dPhi_dXi(theta,xi,i/2.)+2*k2(theta,xi,n)+2*k3(theta,xi,n)+k4(theta,xi,n))
        theta += dTheta
        phi += dPhi
        xi += step
        Theta[i].append(theta)
        Phi[i].append(phi)
        Xi[i].append(xi)
    print i/2., round(xi,4), round(dTheta_dXi(phi,xi),4), round(xi/3./dTheta_dXi(phi,xi),4), round(1./(4*np.pi*(i/2.+1))/dTheta_dXi(phi,xi)**2,4)
    theta = theta0
    phi = phi0
    xi = xi0 + step
Using RK4 for coupled systems
If one understands a first-order system as a vector-valued system working with vector states, then the scalar version of RK4
k1 = f(x,y)
k2 = f(x+0.5*h, y+0.5*h*k1)
k3 = f(x+0.5*h, y+0.5*h*k2)
k4 = f(x+h, y+h*k3)
x,y = x+h, y+h/6*(k1+2*k2+2*k3+k4)
can also be used directly in the vector case. Sometimes it is instructive to implement this component-wise. While mathematical texts prefer one-letter variable names, possibly with sub- or superscripts, variables in program code are usually multi-lettered. So instead of r2 and k2 it would be more descriptive to use k2_Theta and k2_Phi.
Then it becomes rather intuitive that the state used to evaluate the k3 components has the arguments theta+0.5*step*k2_Theta and phi+0.5*step*k2_Phi.
k2_Xi etc. is always 1 for the independent variable, so the value for the 3rd stage is simply xi+0.5*step.
Implementation details RK4
The values of k1 etc. are fixed within a step and result from evaluating the derivative functions. It makes no sense to declare them as functions themselves. The RK4 step specialized to this situation becomes just
def RK4_update(theta, phi, xi, step, n):
    k1_Theta = dTheta_dXi(phi, xi)
    k1_Phi = dPhi_dXi(theta, xi, n)
    k2_Theta = dTheta_dXi(phi+0.5*step*k1_Phi, xi+0.5*step)
    k2_Phi = dPhi_dXi(theta+0.5*step*k1_Theta, xi+0.5*step, n)
    k3_Theta = dTheta_dXi(phi+0.5*step*k2_Phi, xi+0.5*step)
    k3_Phi = dPhi_dXi(theta+0.5*step*k2_Theta, xi+0.5*step, n)
    k4_Theta = dTheta_dXi(phi+step*k3_Phi, xi+step)
    k4_Phi = dPhi_dXi(theta+step*k3_Theta, xi+step, n)
    dTheta = (step/6)*(k1_Theta+2*k2_Theta+2*k3_Theta+k4_Theta)
    dPhi = (step/6)*(k1_Phi+2*k2_Phi+2*k3_Phi+k4_Phi)
    return dTheta, dPhi
On the singularity of the Lane-Emden equation
For the solution to exist at xi=0 one needs at least phi ~ xi^k with k>=2. This means theta is almost constant near the origin, which by integration gives phi = theta0^n*xi^3/3, which in the other equation gives theta = theta0 - theta0^n*xi^2/6. This allows taking the first step away from the singularity without using the numerical method.
xi = step
theta, phi = theta0 - theta0**n*xi**2/6, theta0**n*xi**3/3
Xi[i] = [0, xi]
Theta[i] = [theta0, theta]
Phi[i] = [0, phi]
Then the main loop can be written as
for i in range(N):   # N = number of n values, e.g. N = 13 for n = 0, 0.5, ..., 6
    n = i/2
    xi = step
    theta, phi = theta0 - theta0**n*xi**2/6, theta0**n*xi**3/3
    Xi[i] = [0, xi]
    Theta[i] = [theta0, theta]
    Phi[i] = [0, phi]
    while xi < xi_max:
        if theta < 0:
            break
        dTheta, dPhi = RK4_update(theta, phi, xi, step, n)
        theta += dTheta
        phi += dPhi
        xi += step
        Theta[i].append(theta)
        Phi[i].append(phi)
        Xi[i].append(xi)
Then plotting with

for i in range(N):
    plt.plot(Xi[i], Theta[i], label=f"n={i/2}")
plt.grid(); plt.legend(); plt.show()

results in a plot of theta over xi for each value of n.
Trick used: To avoid rational powers of negative values, replace theta**n with theta*abs(theta)**(n-1) or similar continuations.
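A minimal sketch of such a continuation (theta_pow is a hypothetical helper name):

def theta_pow(theta, n):
    # real-valued continuation of theta**n for negative theta
    return theta * abs(theta)**(n - 1)

dPhi_dXi would then return theta_pow(theta, n)*xi**2 instead of theta**n*xi**2.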
Old contents
You should once again work out which update goes where. xi is the independent variable and thus only gets the updates 0.5*step and step; the updates of theta use the derivative dTheta_dXi, and similarly phi is updated using the slopes dPhi_dXi:
def r2(phi,xi):
    return dTheta_dXi(phi+0.5*step*dPhi_dXi(theta,xi,n), xi+0.5*step)

def k2(theta,xi,n):
    return dPhi_dXi(theta+0.5*step*dTheta_dXi(phi,xi), xi+0.5*step, n)

def r3(phi,xi):
    return dTheta_dXi(phi+0.5*step*k2(theta,xi,n), xi+0.5*step)
etc.
Now one can see that, due to the coupled nature of the equations, you need both theta and phi as arguments everywhere. Moreover, even if this works in the end, you compute many of the values multiple times, whereas assembling everything in one loop requires only one computation per value.
I'm currently incredibly stuck on what isn't working in my code and have been staring at it for hours. I have created some functions to approximate the solution of the Laplace equation adaptively using the finite element method, and then to estimate its error using the dual weighted residual. The error function should give a vector of errors (one error for each element); I then choose the biggest errors, add more elements around them, solve again and recheck the error. However, I have no idea why my error estimate isn't changing!
My first 4 functions are correct, but I will include them in case someone wants to try the code:
import numpy as np
import math
import scipy.sparse.linalg
from scipy.sparse import diags
from scipy.interpolate import interp1d

def Poisson_Stiffness(x0):
    """Finds the Poisson equation stiffness matrix with any non-uniform mesh x0"""
    x0 = np.array(x0)
    N = len(x0) - 1  # The number of elements; x0, x1, ..., xN
    h = x0[1:] - x0[:-1]
    a = np.zeros(N+1)
    a[0] = 1  # BOUNDARY CONDITIONS
    a[1:-1] = 1/h[1:] + 1/h[:-1]
    a[-1] = 1/h[-1]
    a[N] = 1  # BOUNDARY CONDITIONS
    b = -1/h
    b[0] = 0  # BOUNDARY CONDITIONS
    c = -1/h
    c[N-1] = 0  # BOUNDARY CONDITIONS: DIRICHLET
    data = [a.tolist(), b.tolist(), c.tolist()]
    Positions = [0, 1, -1]
    Stiffness_Matrix = diags(data, Positions, (N+1, N+1))
    return Stiffness_Matrix
def NodalQuadrature(x0):
    """Finds the nodal quadrature approximation of sin(pi x)"""
    x0 = np.array(x0)
    h = x0[1:] - x0[:-1]
    N = len(x0) - 1
    approx = np.zeros(len(x0))
    approx[0] = 0  # BOUNDARY CONDITIONS
    for i in range(1, N):
        approx[i] = math.sin(math.pi*x0[i])
        approx[i] = (approx[i]*h[i-1] + approx[i]*h[i])/2
    approx[N] = 0  # BOUNDARY CONDITIONS
    return approx
def Solver(x0):
    Stiff_Matrix = Poisson_Stiffness(x0)
    NodalApproximation = NodalQuadrature(x0)
    NodalApproximation[0] = 0
    U = scipy.sparse.linalg.spsolve(Stiff_Matrix, NodalApproximation)
    return U

def Dualsolution(rich_mesh, qoi_rich_node):  # BOUNDARY CONDITIONS?
    """Find Z from stiffness matrix: Z = K^-1 Q over the richer mesh"""
    K = Poisson_Stiffness(rich_mesh)
    Q = np.zeros(len(rich_mesh))
    Q[qoi_rich_node] = 1.0
    Z = scipy.sparse.linalg.spsolve(K, Q)
    return Z
My error indicator function takes in an approximation Uh, together with the mesh it was solved over, and finds eta = (f - B*u)*z.
def Error_Indicators(Uh, U_mesh, Z, Z_mesh, f):
    """Take in U, interpolate to the same mesh as Z, then solve for the eta vector"""
    u_inter = interp1d(U_mesh, Uh)  # Interpolation of the old mesh
    U2 = u_inter(Z_mesh)            # u evaluated on the new mesh
    Bz = Poisson_Stiffness(Z_mesh)
    Bz = Bz.tocsr()
    eta = np.empty(len(Z_mesh))
    for i in range(len(Z_mesh)):
        for j in range(len(Z_mesh)):
            eta[i] += (f[i] - Bz[i,j]*U2[j])
    for i in range(len(Z)):
        eta[i] = eta[i]*Z[i]
    return eta
My next function seems to adapt the mesh very well to the given error indicator! But I have no idea why the indicator seems to stay the same regardless.
def Mesh_Refinement(base_mesh, tolerance, refinement, z_mesh, QOI_z_mesh):
    """Solve for U on a normal mesh, take in Z, find error indicators, adapt. OUTPUT NEW MESH"""
    New_mesh = base_mesh
    Z = Dualsolution(z_mesh, QOI_z_mesh)  # Solve the dual solution only once
    f = np.empty(len(z_mesh))
    for i in range(len(z_mesh)):
        f[i] = math.sin(math.pi*z_mesh[i])
    U = Solver(New_mesh)
    eta = Error_Indicators(U, base_mesh, Z, z_mesh, f)
    while max(abs(k) for k in eta) > tolerance:
        orderedeta = np.sort(eta)  # Sort error indicators, LENGTH 40
        biggest = np.flipud(orderedeta[int((1-refinement)*len(eta)):len(eta)])
        position = np.empty(len(biggest))
        ratio = float(len(New_mesh))/float(len(z_mesh))
        for i in range(len(biggest)):
            position[i] = eta.tolist().index(biggest[i])*ratio  # GIVES WHAT NUMBER NODE TO REFINE
        refine = np.zeros(len(position))
        for i in range(len(position)):
            refine[i] = math.floor(position[i]) + 0.5  # AT WHAT NODE TO PUT THE NEW ELEMENT, 5.5 ETC.
        refine = np.flipud(sorted(set(refine)))
        for i in range(len(refine)):
            New_mesh = np.insert(New_mesh, refine[i]+0.5, (New_mesh[refine[i]+0.5]+New_mesh[refine[i]-0.5])/2)
        U = Solver(New_mesh)
        eta = Error_Indicators(U, New_mesh, Z, z_mesh, f)
        print eta
An example input for this would be:
Mesh_Refinement(np.linspace(0,1,3),0.1,0.2,np.linspace(0,1,60),20)
I understand there is a lot of code here, but I am at a loss; I have no idea where to turn!
Please consider this piece of code from def Error_Indicators:
eta = np.empty(len(Z_mesh))
for i in range(len(Z_mesh)):
    for j in range(len(Z_mesh)):
        eta[i] = (f[i] - Bz[i,j]*U2[j])
Here you overwrite eta[i] on every j iteration, so the inner loop is useless and you could go directly to the last possible j. Did you mean to compute the sum of the (f[i] - Bz[i,j]*U2[j]) terms?
eta = np.zeros(len(Z_mesh))  # zeros rather than empty, so that += accumulates from 0
for i in range(len(Z_mesh)):
    for j in range(len(Z_mesh)):
        eta[i] += (f[i] - Bz[i,j]*U2[j])
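As a side note, the same accumulation can be written without Python loops; this is only a sketch, assuming Bz stays in CSR format so that a matrix-vector product replaces the inner j loop:

# sum over j of (f[i] - Bz[i,j]*U2[j]) equals N*f[i] - (Bz @ U2)[i]
N = len(Z_mesh)
eta = N*f - Bz.dot(U2)

If the intended residual is actually f - Bz @ U2, with each f[i] counted once rather than N times, drop the factor N.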