How to convert Pine Script stdev to Python code?
I'm trying to convert Pine Script's stdev to Python code, but it seems I'm doing it wrong.
https://www.tradingview.com/pine-script-reference/v4/#fun_stdev
Pine Script:
//the same on pine
isZero(val, eps) => abs(val) <= eps

SUM(fst, snd) =>
    EPS = 1e-10
    res = fst + snd
    if isZero(res, EPS)
        res := 0
    else
        if not isZero(res, 1e-4)
            res := res
        else
            15

pine_stdev(src, length) =>
    avg = sma(src, length)
    sumOfSquareDeviations = 0.0
    for i = 0 to length - 1
        sum = SUM(src[i], -avg)
        sumOfSquareDeviations := sumOfSquareDeviations + sum * sum
    stdev = sqrt(sumOfSquareDeviations / length)
Python code:
import talib as ta

def isZero(val, eps):
    if abs(val) <= eps:
        return True
    else:
        return False

def SUM(fst, snd):
    EPS = 1e-10
    res = fst + snd
    if isZero(res, EPS):
        res += 0
    else:
        if not isZero(res, 1e-4):
            res = res
        else:
            res = 15
    return res

def pine_stdev(src, length):
    avg = ta.SMA(src, length)
    sumOfSquareDeviations = 0.0
    for i in range(length - 1):
        s = SUM(src.iloc[i], -avg.iloc[i])
        sumOfSquareDeviations = sumOfSquareDeviations + s * s
    stdev = (sumOfSquareDeviations / length)*(sumOfSquareDeviations / length)
What am I doing wrong? And why does the SUM function return 15?
TradingView has made a mistake in the code on their site.
The number "15" should be written as "1e-5".
You can use this code:
def SUM(fst, snd):
    EPS = 1e-10
    res = fst + snd
    if isZero(res, EPS):
        res = 0
    else:
        if not isZero(res, 1e-4):
            res = res
        else:
            res = 1e-5
    return res
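
Beyond the SUM fix, a couple of other lines in your Python port don't match the Pine original: Pine's for i = 0 to length - 1 is inclusive, so the Python loop needs range(length) rather than range(length - 1); Pine subtracts the current bar's avg from src[i] (i bars back), not avg[i]; and the last line should take a square root, not square the value again. Here is a rough sketch of the whole conversion with those corrections, assuming src is a pandas Series of prices (a pandas rolling mean stands in for ta.SMA, which should be equivalent here):

import math
import pandas as pd

def isZero(val, eps):
    return abs(val) <= eps

def SUM(fst, snd):
    # Same helper as above, with the corrected 1e-5 literal
    EPS = 1e-10
    res = fst + snd
    if isZero(res, EPS):
        res = 0
    elif not isZero(res, 1e-4):
        res = res
    else:
        res = 1e-5
    return res

def pine_stdev(src, length):
    # Rolling population stdev over `length` bars, mimicking Pine's stdev()
    avg = src.rolling(length).mean()   # plays the role of sma(src, length)
    out = pd.Series(index=src.index, dtype=float)
    for t in range(length - 1, len(src)):
        sum_sq = 0.0
        for i in range(length):        # Pine's "for i = 0 to length - 1" is inclusive
            # Pine's src[i] means "i bars back from the current bar",
            # and avg (with no offset) is the current bar's SMA
            d = SUM(src.iloc[t - i], -avg.iloc[t])
            sum_sq += d * d
        out.iloc[t] = math.sqrt(sum_sq / length)   # sqrt, not squaring
    return out

Called as, say, pine_stdev(df['close'], 20) on a hypothetical DataFrame of closes, it returns a Series aligned with the input, with NaN for the first length - 1 bars, roughly matching Pine's na warm-up values.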