I tried to implement a weighted correlation function based on formula (2) of the article at this link:
http://staff.ustc.edu.cn/~lshao/papers/paper07.pdf
Suppose we have three vectors s, r and w, each with n elements. The vector w is obtained from the following formulas:
w = |r| / (1 + D)
D = |s - k*r|
k = (r^T * s) / (r^T * r)
I would like to implement the weighted correlation function described in the article. Is my implementation correct?
I start from a matrix of dimension [224, 640], i.e. 640 vectors of 224 elements each. I would like to calculate the weighted correlation coefficient between each of those 640 vectors and one other vector, r (the reference). Each of the 640 vectors plays the role of s.
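For concreteness, this is what those three formulas look like on a pair of toy vectors (a minimal sketch; the values are made up):

import numpy as np

s = np.array([1.0, 2.0, 3.0])   # test spectrum
r = np.array([0.9, 2.1, 2.9])   # reference spectrum

k = (r @ s) / (r @ r)   # k = (r^T s)/(r^T r), a scalar
D = np.abs(s - k*r)     # elementwise distance between s and its projection on r
w = np.abs(r)/(1 + D)   # weight vector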
This is the code I wrote to implement the weighted correlation function between two arrays selected from a matrix:
ref = reference                  # reference spectrum r, shape (224, 1)
ref_mean = np.mean(ref)          # mean of the reference spectrum
sens = 190
nf = n                           # number of matrices (frames)
npixels = 640                    # 640 spectra per frame, from the question
frame_correlation = np.zeros((1, 640))
img_correlation = np.zeros((nf, npixels))
for i in range(nf):
    frame_test = dati_new[:, :, i]   # dati_new is a 3D structure made of nf matrices
    for j in range(npixels):
        spettro_test = frame_test[:, j]   # this is my vector s
        spettro_test = np.reshape(spettro_test, (224, 1))
        spettro_test_mean = np.mean(spettro_test)
        # weight computation for the selected spectrum
        k = np.dot(np.transpose(ref), spettro_test)/np.dot(np.transpose(ref), ref)
        k = k[0][0]
        D = np.abs(spettro_test - k*ref)
        W = np.abs(ref)/(1 + D)
        # numerator of the correlation coefficient (formula in the article)
        numeratore = np.sum(W*(spettro_test - spettro_test_mean)*(ref - ref_mean))
        # denominator of the correlation coefficient
        den1_ex = np.sqrt(np.sum(W*np.power(spettro_test - spettro_test_mean, 2)))
        den2_ex = np.sqrt(np.sum(W*np.power(ref - ref_mean, 2)))
        denominatore = den1_ex * den2_ex
        rho = numeratore/denominatore
        if rho < 0:
            rho = 0
        if rho > 1:   # just in case
            rho = 1
        if rho >= 0.998:
            rho = (sens*rho)/100
        frame_correlation[:, j] = rho
    img_correlation[i, :] = frame_correlation

img_correlation = np.array(img_correlation)
fig, ax = plt.subplots()
ax.imshow(img_correlation, cmap="gray", origin="lower")
plt.title('correlation coefficient image')
plt.xlabel("Pixels")
plt.ylabel("Number of frames")
plt.show()
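For what it's worth, the per-pixel loop can be expressed as one matrix operation per frame. A minimal vectorized sketch of the same computation (assuming, as above, that ref has shape (224, 1) and each frame has shape (224, npixels)):

import numpy as np

def weighted_correlation(frame, ref):
    """Weighted correlation of every column of `frame` against `ref`.
    frame: (224, npixels), ref: (224, 1). Returns shape (npixels,)."""
    k = (ref.T @ frame) / (ref.T @ ref)   # one k per column, shape (1, npixels)
    D = np.abs(frame - ref @ k)           # |s - k*r| per column
    W = np.abs(ref) / (1 + D)             # weights per column
    s_mean = frame.mean(axis=0, keepdims=True)
    r_mean = ref.mean()
    num = np.sum(W*(frame - s_mean)*(ref - r_mean), axis=0)
    den = (np.sqrt(np.sum(W*(frame - s_mean)**2, axis=0))
           * np.sqrt(np.sum(W*(ref - r_mean)**2, axis=0)))
    return num / den

Each frame then fills one row of img_correlation with a single call, and the clipping/scaling of rho can be applied to the returned vector afterwards.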
I have quite a big project I want to solve with GEKKO. It consists of quite a large number of partial differential equations, and I have a function that uses an iterative process to calculate a steady-state "leak". However, GEKKO runs the function only during initialization. I want GEKKO to solve this task while taking that function into account. It would be really hard to write this function in GEKKO's equations, but without GEKKO it would be really hard to solve the partial differential equations. So I am stuck, and I would appreciate any help.
Here is a simple example of what I want to implement:

import numpy as np
from gekko import GEKKO

a = 0.01

def CalculateLeak(P, a):
    a = a - P*0.1
    print("Leak is calculated")
    return a

m = GEKKO(remote=False)
tf = 10
nt = int(tf/1) + 1
m.time = np.linspace(0, tf, nt)
P = m.Var(0.1)
m.Equation(P.dt() == P*0.1 - CalculateLeak(P, a))
m.options.IMODE = 7
m.solve(disp=False)
print("Finished")
print(a)
Below is the function I actually want to add to my project. It calculates the adsorption amount based on P (pressure), T (temperature), and the molar fractions y1, y2, y3, y4, y5, y6. All these variables should come from the partial differential equations solved in time by GEKKO (except perhaps temperature, which can be assumed constant for some time). At every time step this function would calculate the amount of gas adsorbed and return it to the partial differential equations as a source term.
import numpy as np
import matplotlib.pyplot as plt

def FastIAST(P_gas, T, y1, y2, y3, y4, y5, y6):
    IP1_CH4 = 0.0016     #kmol/kg
    IP2_CH4 = 0          #1/bar
    IP3_CH4 = 4.2E-05    #1/bar
    IP4_CH4 = 2922.78    #K
    IP1_C2H6 = 0.0027    #kmol/kg
    IP2_C2H6 = 0.0       #1/bar
    IP3_C2H6 = 2.66E-04  #1/bar
    IP4_C2H6 = 2833.77   #K
    IP1_C3H8 = 0.0062    #kmol/kg
    IP2_C3H8 = 0.0       #1/bar
    IP3_C3H8 = 3.75E-04  #1/bar
    IP4_C3H8 = 2795.28   #K
    IP1_C4H10 = 0.007    #kmol/kg
    IP2_C4H10 = 0.0      #1/bar
    IP3_C4H10 = 0.0015   #1/bar
    IP4_C4H10 = 2600     #K
    IP1_CO2 = 0.0028     #kmol/kg
    IP2_CO2 = 0.0        #(kmol/kg)/bar
    IP3_CO2 = 0.000748   #1/bar
    IP4_CO2 = 2084.44    #K
    IP1_N2 = 0.0075      #kmol/kg
    IP2_N2 = 0.0         #(kmol/kg)/bar
    IP3_N2 = 0.00099     #1/bar
    IP4_N2 = 935.77      #K

    Q1 = IP1_CH4 - IP2_CH4*T      # Isotherm max capacity CH4
    Q2 = IP1_C2H6 - IP2_C2H6*T    # Isotherm max capacity C2H6
    Q3 = IP1_C3H8 - IP2_C3H8*T    # Isotherm max capacity C3H8
    Q4 = IP1_C4H10 - IP2_C4H10*T  # Isotherm max capacity C4H10
    Q5 = IP1_CO2 - IP2_CO2*T      # Isotherm max capacity CO2
    Q6 = IP1_N2 - IP2_N2*T        # Isotherm max capacity N2

    b1 = IP3_CH4*np.exp(IP4_CH4/T)      # Isotherm affinity coeff. CH4
    b2 = IP3_C2H6*np.exp(IP4_C2H6/T)    # Isotherm affinity coeff. C2H6
    b3 = IP3_C3H8*np.exp(IP4_C3H8/T)    # Isotherm affinity coeff. C3H8
    b4 = IP3_C4H10*np.exp(IP4_C4H10/T)  # Isotherm affinity coeff. C4H10
    b5 = IP3_CO2*np.exp(IP4_CO2/T)      # Isotherm affinity coeff. CO2
    b6 = IP3_N2*np.exp(IP4_N2/T)        # Isotherm affinity coeff. N2

    error = 0  # 1 - there was an error in the program, 0 - OK
    N = 6      # Number of components

    # Langmuir isotherm
    SingleComponentCapacity = np.array([Q1, Q2, Q3, Q4, Q5, Q6])  # Langmuir isotherm capacity of every component
    AffinityCoefficient = np.array([b1, b2, b3, b4, b5, b6])      # Langmuir affinity coefficient of every component
    fractionGas = np.array([y1, y2, y3, y4, y5, y6])              # Gas fraction of every component

    # Initialization
    fastiastGraphConcentration = np.zeros(N)
    fastiastGraphFraction = np.zeros(N)
    fastiastPressure = 0
    adsorbedFraction = np.zeros(N)
    adsorbedConcentration = np.zeros(N)

    # Checking...
    if (len(fractionGas) < N or len(SingleComponentCapacity) < N
            or len(AffinityCoefficient) < N):
        print("You have the incorrect number of components")
        error = 1
    if np.sum(fractionGas) < 0.95 or np.sum(fractionGas) > 1.05:
        error = 1
        print("The molar fractions sum is not equal to 1")

    ### Calculation ###
    kappa_old = np.zeros(N)
    delta_kappa = np.ones(N)
    kappa = np.zeros(N)
    CmuT = 0
    partialPressureComponents = fractionGas*P_gas
    for k in range(N):
        CmuT += SingleComponentCapacity[k]*AffinityCoefficient[k]*partialPressureComponents[k]
    for k in range(N):
        kappa[k] = CmuT/(SingleComponentCapacity[k])
    i = 0
    while np.any(delta_kappa > 1e-4):
        f = np.zeros(N)
        fDerivative = np.zeros(N)
        g = np.zeros(N)
        sigma = np.zeros(N)
        phi = np.zeros((N, N))  # plain ndarray; np.matrix is unnecessary here
        for k in range(N):
            f[k] = SingleComponentCapacity[k]*(np.log(1 + kappa[k]))
            fDerivative[k] = SingleComponentCapacity[k]*(1/(1 + kappa[k]))
        for k in range(N-1):
            g[k] = f[k] - f[k+1]
        for k in range(N):
            g[N-1] += AffinityCoefficient[k]*partialPressureComponents[k]/kappa[k]
        g[N-1] = g[N-1] - 1
        for k in range(N-1):
            phi[k, k] = fDerivative[k]
            phi[k, k+1] = -fDerivative[k+1]
        for k in range(0, N):
            phi[N-1, k] = -(AffinityCoefficient[k]*partialPressureComponents[k]/(kappa[k]**2))
        sigma = np.linalg.solve(phi, g)
        kappa_old = kappa
        kappa = kappa_old - sigma
        delta_kappa = np.abs(kappa - kappa_old)
        i += 1
        if i > 20 or np.any(kappa < 0):
            print("No convergence")
            error = 1
            break
        if np.any(kappa < 0):
            print("No convergence")
            error = 1
            break

    adsorbedFraction = partialPressureComponents*AffinityCoefficient/kappa
    adsorbedConcentrationPure = SingleComponentCapacity*(kappa/(1 + kappa))
    C_total = 0
    for k in range(0, N):
        C_total += ((adsorbedFraction[k])/adsorbedConcentrationPure[k])
    C_total = 1/C_total
    adsorbedConcentration = C_total*adsorbedFraction
    fastiastGraphConcentration = np.vstack((fastiastGraphConcentration, adsorbedConcentration))
    fastiastGraphFraction = np.vstack((fastiastGraphFraction, adsorbedFraction))
    fastiastPressure = np.vstack((fastiastPressure, P_gas))

    if error == 0:
        ### Result ###
        return fastiastGraphConcentration[1, :]
    else:
        return 0

FastIAST(2, 298, 1, 0, 0, 0, 0, 0)
There is no current method to call external "black-box" functions with Gekko. One of the reasons that Gekko performs well is that it compiles the functions to byte-code as if they were written in FORTRAN or C++, and it uses automatic differentiation to provide sparse 1st and 2nd derivatives to the solvers.

One work-around is to use a c-spline (1D) or b-spline (2D) to approximate the function if there are only one or two independent variables. The simple problem would qualify, but FastIAST has 8 independent variables, so that approach wouldn't work there. There is also the deep learning library in Gekko to approximate functions of any dimension, but it may be more difficult to control the approximation error.

There are new developments coming that may allow external function calls and interfaces to other machine learning libraries that would allow function approximations. As of Gekko v1.0.4, external black-box function calls aren't possible. Python function calls are allowed, such as:
from gekko import GEKKO

m = GEKKO()

def f(x, c):
    # builds a Gekko expression; evaluated once at model-build time
    y = m.sum([(xi - c)**2 for xi in x])
    return y

x1 = m.Array(m.Var, 5)
p = 2.1
m.Minimize(f(x1, p))
m.Equation(f(x1, 0) <= 10)
m.solve()
print(x1)
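For the simple one-variable example above, the c-spline work-around might look like the following sketch (my own assumptions: the leak is pre-sampled from the Python function over the expected pressure range, and IMODE=4 is used so the spline equation is solved together with the ODE):

import numpy as np
from gekko import GEKKO

def CalculateLeak(P, a=0.01):
    # black-box leak function, evaluated offline
    return a - P*0.1

# pre-sample the black box over the expected range of P
p_data = np.linspace(0, 1, 51)
leak_data = CalculateLeak(p_data)

m = GEKKO(remote=False)
m.time = np.linspace(0, 10, 11)
P = m.Var(0.1)
leak = m.Var()
m.cspline(P, leak, p_data, leak_data)  # cubic-spline surrogate of the black box
m.Equation(P.dt() == P*0.1 - leak)
m.options.IMODE = 4  # dynamic simulation
m.solve(disp=False)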
I need help finishing the code below, specifically steps 3 and 4.
Problem
Compute the temperature (K) profile throughout a cylindrical SiGe wire with thermal conductivity k = 4.2 W/(m*K), length L = 0.05 m, and radius R = 0.005 m.
The boundary conditions are given in the figure below. The solid lines correspond to zero-flux boundary conditions, the long-dashed line to open boundaries with a known temperature, and the short-dashed lines to open boundaries with known flux via convection (the listed T is the “ambient” T). In this figure, assume z (in m) varies in the horizontal direction while r (in m) varies in the vertical direction (the upper left corner is the origin).
Justify your approach. Plot the temperature distribution throughout the wire using a 2D color-map with proper labels. Include contour lines.
https://www.chegg.com/homework-help/questions-and-answers/problem-2-compute-temperature-k-profile-throughout-cylindrical-sige-wire-thermal-conductiv-q96105385
Code
# Problem 2
# Import the required modules
import numpy as np
import matplotlib.pyplot as plt

# Constants
k = 4.2    # Thermal conductivity in W/(m.K)
L = 0.05   # Length in m
R = 0.005  # Radius in m
T = 575    # Ambient temp. in K
T1 = 423   # K
h1 = 45    # kW/(m^2.K)
T2 = 348   # K
h2 = 650   # kW/(m^2.K)
Ta = 298   # K
h = 7.5    # kW/(m^2.K)

# Iteration parameters
maxit = 2000
tol = 0.0001  # Relative tolerance
merr = 1e5
lam = 1.4     # Parameter for convergence rate

# Setup grid
dr = 0.01
nr = int(R/dr) + 1
nz = int(L/dr) + 1
rr = np.linspace(0, R, num=nr, endpoint=True)
zz = np.linspace(0, L, num=nz, endpoint=True)

# Step 1 - Initial Guesses
M = np.ones((nz, nr))  # Create 2D array w/ ones (z = # of rows, r = # of col)
M = M*T                # Matrix of T now

# Step 2 - Apply Boundary Conditions
M[0, 0:nr] = T
M[-1, 0:nr] = T1
M[0:nz, 0] = T2
M[0:nz, -1] = Ta

# Step 3 and 4 - Apply LDE, walking over nodes
cc = 0  # Counter
a = k*dr*dr/(4*R)
while merr > tol:
    Mold = np.copy(M)  # Save current values to old
    M[-1, -1] = (2*M[-2, -1] + 2*M[-1, -2])/(4 + a)  # Corner
    for j in range(1, nz-1):
        M[j, -1] = (2*M[j, -2] + M[j-1, -1] + M[j+1, -1])/(4 + a)
    for i in range(1, nr-1):
        M[-1, i] = (M[-1, i-1] + M[-1, i+1] + 2*M[-2, i])/(4 + a)
    for i in range(1, 70):
        M[0, i] = (M[0, i-1] + M[0, i+1] + 2*M[1, i])/(4 + a)
    for j in range(1, nz-1):
        for i in range(1, nr-1):
            M[j, i] = (M[j, i-1] + M[j, i+1] + M[j-1, i] + M[j+1, i])/(4 + a)
    M = lam*M + (1 - lam)*Mold  # Adjust for convergence rate
    ea = np.abs((M - Mold)/Mold)
    cc = cc + 1
    merr = np.max(ea)

# Plot color mesh
X, Y = np.meshgrid(rr, zz)
p = plt.pcolormesh(rr, zz, M, cmap="RdBu", shading="flat", vmin=0, vmax=100)
ct = plt.contour(X, Y, M, cmap="gray", levels=10, vmin=0, vmax=100)
c = plt.colorbar(p)
plt.xlabel("R (m)")
plt.ylabel("Z (m)")
c.set_label("Temperature (K)")
plt.show()

# Print Results
print("Converged in %d iterations" % cc)
print("Max error is %f" % merr)
print("Mean temperature along central axis %f K" % (np.mean(M[:, 0])))
I am trying to make my own CFD solver and one of the most computationally expensive parts is solving for the pressure term. One way to solve Poisson differential equations faster is by using a multigrid method. The basic recursive algorithm for this is:
function phi = V_Cycle(phi,f,h)
    % Recursive V-cycle multigrid for solving the Poisson equation
    % (\nabla^2 phi = f) on a uniform grid of spacing h

    % Pre-smoothing
    phi = smoothing(phi,f,h);

    % Compute residual errors
    r = residual(phi,f,h);

    % Restriction
    rhs = restriction(r);
    eps = zeros(size(rhs));

    % Stop recursion at smallest grid size, otherwise continue recursion
    if smallest_grid_size_is_achieved
        eps = smoothing(eps,rhs,2*h);
    else
        eps = V_Cycle(eps,rhs,2*h);
    end

    % Prolongation and correction
    phi = phi + prolongation(eps);

    % Post-smoothing
    phi = smoothing(phi,f,h);
end
I've attempted to implement this algorithm myself (also at the end of this question); however, it is very slow and doesn't give good results, so evidently it is doing something wrong. I've been trying to find out why for too long, and I think it's worth seeing if anyone can help me.

If I use a grid size of 2^5 by 2^5 points, then it can solve it and gives reasonable results. However, as soon as I go above this, it takes exponentially longer to solve and basically gets stuck at some level of inaccuracy, no matter how many V-cycles are performed. At 2^7 by 2^7 points, the code takes way too long to be useful.

I think my main issue is that my implementation of the Jacobi iteration uses linear algebra to calculate the update at each step. This should, in general, be fast; however, the update matrix is (n*m) by (n*m), so for a 2^7 by 2^7 grid each update is a dot product with a 2^14 by 2^14 matrix, which is expensive. As most of the entries are just zeros, should I calculate the result using a different method?
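For reference, since most entries of that update matrix are zero, a Jacobi sweep can be written with pure array slicing and no matrices at all. A minimal sketch (my own naming) of the 5-point stencil for nabla^2(v) = f with grid spacing h:

import numpy as np

def jacobi_sweep(v, f, h):
    """One matrix-free Jacobi iteration of the 5-point stencil for
    nabla^2(v) = f; only interior points are updated."""
    v_new = v.copy()
    v_new[1:-1, 1:-1] = 0.25*(v[2:, 1:-1] + v[:-2, 1:-1] +
                              v[1:-1, 2:] + v[1:-1, :-2] -
                              h**2*f[1:-1, 1:-1])
    return v_new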
If anyone has any experience with multigrid methods, I would appreciate any advice!
Thanks
My code:
# -*- coding: utf-8 -*-
"""
Created on Tue Dec 29 16:24:16 2020
@author: mclea
"""
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import convolve2d
from mpl_toolkits.mplot3d import Axes3D
from scipy.interpolate import griddata
from matplotlib import cm


def restrict(A):
    """
    Creates a new grid of points which is half the size of the original
    grid in each dimension.
    """
    n = A.shape[0]
    m = A.shape[1]
    new_n = int((n-2)/2 + 2)
    new_m = int((m-2)/2 + 2)
    new_array = np.zeros((new_n, new_m))
    for i in range(1, new_n-1):
        for j in range(1, new_m-1):
            ii = int((i-1)*2) + 1
            jj = int((j-1)*2) + 1
            # print(i, j, ii, jj)
            new_array[i, j] = np.average(A[ii:ii+2, jj:jj+2])
    new_array = set_BC(new_array)
    return new_array


def interpolate_array(A):
    """
    Creates a grid of points which is double the size of the original
    grid in each dimension. Uses linear interpolation between grid points.
    """
    n = A.shape[0]
    m = A.shape[1]
    new_n = int((n-2)*2 + 2)
    new_m = int((m-2)*2 + 2)
    new_array = np.zeros((new_n, new_m))
    i = (np.indices(A.shape)[0]/(A.shape[0]-1)).flatten()
    j = (np.indices(A.shape)[1]/(A.shape[1]-1)).flatten()
    A = A.flatten()
    new_i = np.linspace(0, 1, new_n)
    new_j = np.linspace(0, 1, new_m)
    new_ii, new_jj = np.meshgrid(new_i, new_j)
    new_array = griddata((i, j), A, (new_jj, new_ii), method="linear")
    return new_array


def adjacency_matrix(rows, cols):
    """
    Creates the adjacency matrix for an n by m shaped grid
    """
    n = rows*cols
    M = np.zeros((n, n))
    for r in range(rows):
        for c in range(cols):
            i = r*cols + c
            # Two inner diagonals
            if c > 0:
                M[i-1, i] = M[i, i-1] = 1
            # Two outer diagonals
            if r > 0:
                M[i-cols, i] = M[i, i-cols] = 1
    return M


def create_differences_matrix(rows, cols):
    """
    Creates the central differences matrix A for an n by m shaped grid
    """
    n = rows*cols
    M = np.zeros((n, n))
    for r in range(rows):
        for c in range(cols):
            i = r*cols + c
            # Two inner diagonals
            if c > 0:
                M[i-1, i] = M[i, i-1] = -1
            # Two outer diagonals
            if r > 0:
                M[i-cols, i] = M[i, i-cols] = -1
    np.fill_diagonal(M, 4)
    return M


def set_BC(A):
    """
    Sets the boundary conditions of the field
    """
    A[:, 0] = A[:, 1]
    A[:, -1] = A[:, -2]
    A[0, :] = A[1, :]
    A[-1, :] = A[-2, :]
    return A


def create_A(n, m):
    """
    Creates all the components required for the Jacobi update function
    for an n by m shaped grid
    """
    LaddU = adjacency_matrix(n, m)
    A = create_differences_matrix(n, m)
    invD = np.zeros((n*m, n*m))
    np.fill_diagonal(invD, 1/4)
    return A, LaddU, invD


def calc_RJ(rows, cols):
    """
    Calculates the Jacobi update matrix Rj for an n by m shaped grid
    """
    n = int(rows*cols)
    M = np.zeros((n, n))
    for r in range(rows):
        for c in range(cols):
            i = r*cols + c
            # Two inner diagonals
            if c > 0:
                M[i-1, i] = M[i, i-1] = 0.25
            # Two outer diagonals
            if r > 0:
                M[i-cols, i] = M[i, i-cols] = 0.25
    return M


def jacobi_update(v, f, nsteps=1, max_err=1e-3):
    """
    Uses a Jacobi update matrix to solve nabla(v) = f
    """
    f_inner = f[1:-1, 1:-1].flatten()
    n = v.shape[0]
    m = v.shape[1]
    A, LaddU, invD = create_A(n-2, m-2)
    Rj = calc_RJ(n-2, m-2)
    update = True
    step = 0
    while update:
        v_old = v.copy()
        step += 1
        vt = v_old[1:-1, 1:-1].flatten()
        vt = np.dot(Rj, vt) + np.dot(invD, f_inner)
        v[1:-1, 1:-1] = vt.reshape((n-2), (m-2))
        err = v - v_old
        if step == nsteps or np.abs(err).max() < max_err:
            update = False
    return v, (step, np.abs(err).max())


def MGV(f, v):
    """
    Solves for nabla(v) = f using a multigrid method
    """
    # global A, r
    n = v.shape[0]
    m = v.shape[1]
    # If on the smallest grid size, compute the exact solution
    if n <= 6 or m <= 6:
        v, info = jacobi_update(v, f, nsteps=1000)
        return v
    else:
        # Smoothing
        v, info = jacobi_update(v, f, nsteps=10, max_err=1e-1)
        A = create_A(n, m)[0]
        # Calculate residual
        r = np.dot(A, v.flatten()) - f.flatten()
        r = r.reshape(n, m)
        # Downsample residual error
        r = restrict(r)
        zero_array = np.zeros(r.shape)
        # Interpolate the correction computed on a coarser grid
        d = interpolate_array(MGV(r, zero_array))
        # Add prolongated coarser grid solution onto the finer grid
        v = v - d
        v, info = jacobi_update(v, f, nsteps=10, max_err=1e-6)
        return v


sigma = 0

# Setting up the grid
k = 6
n = 2**k + 2
m = 2**k + 2
hx = 1/n
hy = 1/m
L = 1
H = 1
x = np.linspace(0, L, n)
y = np.linspace(0, H, m)
XX, YY = np.meshgrid(x, y)

# Setting up the initial conditions
f = np.ones((n, m))
v = np.zeros((n, m))

# How many V-cycles to perform
err = 1
n_cycles = 10
loop = True
cycle = 0

# Perform V-cycles until converged or the maximum
# number of cycles is reached
while loop:
    cycle += 1
    v_new = MGV(f, v)
    if np.abs(v - v_new).max() < err:
        loop = False
    if cycle == n_cycles:
        loop = False
    v = v_new

print("Number of cycles " + str(cycle))
plt.contourf(v)
I realize that I'm not answering your question directly, but I do note that you have quite a few loops that will contribute some overhead cost. When optimizing code, I have found the following thread useful, particularly the line profiler answer. That way you can focus on the "high time cost" lines and then start to ask more specific questions about opportunities to optimize.
How do I get time of a Python program's execution?
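As a concrete example, the line_profiler package can report per-line timings for a chosen function (a minimal sketch with a toy stand-in function; the real candidates here would be jacobi_update and MGV):

from line_profiler import LineProfiler
import numpy as np

def jacobi_sweep(v):
    # toy stand-in for an expensive smoothing step
    return 0.25*(np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                 np.roll(v, 1, 1) + np.roll(v, -1, 1))

lp = LineProfiler()
timed = lp(jacobi_sweep)      # wrap the function to be profiled
timed(np.zeros((128, 128)))   # run it as usual
lp.print_stats()              # per-line timing report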
Given a set of terms ||p_i' - sum_j w_ji*(R_j*p_i + v_j)||^2, where ||...||^2 denotes the squared norm, I want to efficiently set up an array (or a list) in Python filled with these terms. p_i', p_i and v_j are three-dimensional vectors, and R_j is a 3x3 matrix.
I've already tried the following, but I don't know how to incorporate the sum over j:
new_points = r_mesh.points()    # p', returns an Nx3 array
old_points = avg_mesh.points()  # p
n_joints = 3
rv = np.arange(n_joints * 15)   # R_j and v_j are stored in rv
weights = np.random.rand(n_joints, len(new_points))  # w

func = [[np.linalg.norm(
            new_points[i] - (weights[j, i] * ((np.array(rv[j*15:j*15+9]).reshape(3, 3) @ old_points[i])
                                              + np.array(rv[j*9+9:j*9+12]))))
         for j in range(n_joints)] for i in range(len(new_points))]
To make things clearer, here is the original equation that I transformed into a non-linear function in order to feed it to the Levenberg-Marquardt method.
EDIT: I'm sorry; the image shown before was wrong.
The simplest ("auto pilot", no actual thinking required) method would be np.einsum:
# set up example:
n_i, n_j = 20, 30
p = np.random.random((n_i, 3))
pp = np.random.random((n_i, 3))
R = np.random.random((n_j, 3, 3))
w = np.random.random((n_j, n_i))
v = np.random.random((n_j, 3))
# now just tell einsum which index is where and let it
# do its magic
# R_j p_i
Rp = np.einsum('jkl,il', R,p)
# by Einstein convention this will sum over l,
# so Rp has indices ijk
# w_ji (Rp_ij + v_j)
wRpv = np.einsum('ji,ijk->ik', w,Rp+v)
# pure Einstein convention would sum over i and j,
# we override this by passing explicit output indices
# ik to keep i alive
# squared norm
d = pp - wRpv
result = np.einsum('ik,ik', d,d)
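One follow-up, since the question asks for an array of terms rather than their total: the last contraction above sums over both i and k, giving a scalar. Keeping i alive yields one squared norm per point (continuing with d from the block above):

# one squared norm per point p_i, instead of the total sum over i
result_per_point = np.einsum('ik,ik->i', d, d)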
I'm currently incredibly stuck on what isn't working in my code and have been staring at it for hours. I have created some functions to adaptively approximate the solution to the Laplace equation using the finite element method, and then estimate its error using the dual weighted residual. The error function should give a vector of errors (one error for each element); I then choose the biggest errors, add more elements around them, solve again, and then re-check the error. However, I have no idea why my error estimate isn't changing!
My first 4 functions are correct, but I will include them in case someone wants to try the code:
import math
import numpy as np
import scipy.sparse.linalg
from scipy.sparse import diags
from scipy.interpolate import interp1d


def Poisson_Stiffness(x0):
    """Finds the Poisson equation stiffness matrix with any non-uniform mesh x0"""
    x0 = np.array(x0)
    N = len(x0) - 1  # The number of elements; x0, x1, ..., xN
    h = x0[1:] - x0[:-1]
    a = np.zeros(N+1)
    a[0] = 1  # BOUNDARY CONDITIONS
    a[1:-1] = 1/h[1:] + 1/h[:-1]
    a[-1] = 1/h[-1]
    a[N] = 1  # BOUNDARY CONDITIONS
    b = -1/h
    b[0] = 0  # BOUNDARY CONDITIONS
    c = -1/h
    c[N-1] = 0  # BOUNDARY CONDITIONS: DIRICHLET
    data = [a.tolist(), b.tolist(), c.tolist()]
    Positions = [0, 1, -1]
    Stiffness_Matrix = diags(data, Positions, (N+1, N+1))
    return Stiffness_Matrix


def NodalQuadrature(x0):
    """Finds the nodal quadrature approximation of sin(pi x)"""
    x0 = np.array(x0)
    h = x0[1:] - x0[:-1]
    N = len(x0) - 1
    approx = np.zeros(len(x0))
    approx[0] = 0  # BOUNDARY CONDITIONS
    for i in range(1, N):
        approx[i] = math.sin(math.pi*x0[i])
        approx[i] = (approx[i]*h[i-1] + approx[i]*h[i])/2
    approx[N] = 0  # BOUNDARY CONDITIONS
    return approx


def Solver(x0):
    Stiff_Matrix = Poisson_Stiffness(x0)
    NodalApproximation = NodalQuadrature(x0)
    NodalApproximation[0] = 0
    U = scipy.sparse.linalg.spsolve(Stiff_Matrix, NodalApproximation)
    return U


def Dualsolution(rich_mesh, qoi_rich_node):  # BOUNDARY CONDITIONS?
    """Find Z from stiffness matrix Z = K^-1 Q over richer mesh"""
    K = Poisson_Stiffness(rich_mesh)
    Q = np.zeros(len(rich_mesh))
    Q[qoi_rich_node] = 1.0
    Z = scipy.sparse.linalg.spsolve(K, Q)
    return Z
My error indicator function takes in an approximation Uh, together with the mesh it is solved over, and finds eta = (f - B*u)*z.
def Error_Indicators(Uh, U_mesh, Z, Z_mesh, f):
    """Take in U, interpolate to same mesh as Z, then solve for the eta vector"""
    u_inter = interp1d(U_mesh, Uh)  # Interpolation of old mesh
    U2 = u_inter(Z_mesh)            # New function u for the new mesh to use in
    Bz = Poisson_Stiffness(Z_mesh)
    Bz = Bz.tocsr()
    eta = np.empty(len(Z_mesh))
    for i in range(len(Z_mesh)):
        for j in range(len(Z_mesh)):
            eta[i] += (f[i] - Bz[i, j]*U2[j])
    for i in range(len(Z)):
        eta[i] = eta[i]*Z[i]
    return eta
My next function seems to adapt the mesh very well to the given error indicator! I just have no idea why the indicator seems to stay the same regardless.
def Mesh_Refinement(base_mesh, tolerance, refinement, z_mesh, QOI_z_mesh):
    """Solve for U on a normal mesh, take in Z, find error indicators, adapt. OUTPUT NEW MESH"""
    New_mesh = base_mesh
    Z = Dualsolution(z_mesh, QOI_z_mesh)  # Solve dual solution only once
    f = np.empty(len(z_mesh))
    for i in range(len(z_mesh)):
        f[i] = math.sin(math.pi*z_mesh[i])
    U = Solver(New_mesh)
    eta = Error_Indicators(U, base_mesh, Z, z_mesh, f)
    while max(abs(k) for k in eta) > tolerance:
        orderedeta = np.sort(eta)  # Sort error indicators LENGTH 40
        biggest = np.flipud(orderedeta[int((1-refinement)*len(eta)):len(eta)])
        position = np.empty(len(biggest))
        ratio = float(len(New_mesh))/float(len(z_mesh))
        for i in range(len(biggest)):
            position[i] = eta.tolist().index(biggest[i])*ratio  # GIVES WHAT NUMBER NODE TO REFINE
        refine = np.zeros(len(position))
        for i in range(len(position)):
            refine[i] = math.floor(position[i]) + 0.5  # AT WHAT NODE TO PUT NEW ELEMENT, 5.5 ETC.
        refine = np.flipud(sorted(set(refine)))
        for i in range(len(refine)):
            New_mesh = np.insert(New_mesh, refine[i]+0.5, (New_mesh[refine[i]+0.5]+New_mesh[refine[i]-0.5])/2)
        U = Solver(New_mesh)
        eta = Error_Indicators(U, New_mesh, Z, z_mesh, f)
        print(eta)
An example input for this would be:
Mesh_Refinement(np.linspace(0,1,3),0.1,0.2,np.linspace(0,1,60),20)
I understand there is a lot of code here, but I am at a loss; I have no idea where to turn!
Please consider this piece of code from def Error_Indicators:

eta = np.empty(len(Z_mesh))
for i in range(len(Z_mesh)):
    for j in range(len(Z_mesh)):
        eta[i] = (f[i] - Bz[i,j]*U2[j])

Here you overwrite eta[i] on each j iteration, so the inner cycle proves useless and you could go directly to the last possible j. Did you mean to find a sum of the (f[i] - Bz[i,j]*U2[j]) series?
eta = np.zeros(len(Z_mesh))  # start from zeros so the sum accumulates correctly
for i in range(len(Z_mesh)):
    for j in range(len(Z_mesh)):
        eta[i] += (f[i] - Bz[i,j]*U2[j])
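If a residual is what's intended, the double loop can also collapse to a single sparse matrix-vector product. A sketch reusing the names from the question (note that the summation above adds f[i] once per j, i.e. len(Z_mesh) times, which may not be what you want):

# weighted residual eta_i = (f_i - (B u)_i) * z_i, with f counted once
eta = (f - Bz @ U2) * Z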