Integration using the trapezium method with SciPy ('numpy.ndarray' object is not callable) - python

I'm trying to create a function that lets you input an expression for y with upper and lower bounds and N number of steps. It takes the expression, integrates it by running it through the trapezium method for N steps, and spits out the result. Currently an error ('numpy.ndarray' object is not callable) is in my way, and I'm not sure how to fix it. Here's the code:
import numpy as np
import scipy.integrate as integrate
%matplotlib inline

def trap(f, b, a):
    return 0.5 * (b - a) * (f(a) + f(b))

def multistep(f, method, b, a, N):
    step = (b - a) / N
    lower = np.linspace(b, a, N, endpoint=False)
    upper = np.linspace(b + step, a, N)
    result = 0
    for n in range(N):
        result += method(f, lower[n], upper[n])
    return result

def trapezium_rule_integration_function(f, a, b, N):
    x = np.linspace(b, a, N)
    yf = []
    for p in x:
        yf.append(f(p))
    y = np.array(yf)
    return multistep(y, trap, b, a, N)

y = 1/(1 + x**2)
trapezium_rule_integration_function(y, 10, 0, 100)
Thank you in advance for any support available.
Johann Brahms

At a quick glance, the error stems from the function trapezium_rule_integration_function:
    y = np.array(yf)
    return multistep(y,trap,b,a,N)
The first argument of multistep should be a function, but you pass in the array y. Inside multistep (and then trap), y is called with '()' as if it were a function, and a numpy.ndarray is not callable.
By the way, you should rewrite y as
y = lambda x: 1/(1+x**2)
Best of luck.
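For reference, here is a minimal corrected sketch of how the pieces might fit together once the function itself (rather than an array of its values) is passed through to multistep. It assumes the intended integral is of 1/(1+x^2) from 0 to 10, as the final call in the question suggests:

import numpy as np

def trap(f, a, b):
    # Trapezium rule on a single interval [a, b]
    return 0.5 * (b - a) * (f(a) + f(b))

def multistep(f, method, a, b, N):
    # Split [a, b] into N sub-intervals and sum the rule over each one
    edges = np.linspace(a, b, N + 1)
    return sum(method(f, edges[n], edges[n + 1]) for n in range(N))

y = lambda x: 1 / (1 + x**2)
print(multistep(y, trap, 0, 10, 100))   # close to arctan(10) ≈ 1.4711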

Related

Optimization with constraint

I would like to solve a constrained optimization problem:
max ln(c1) + ln(c2)
s.t. 4*c1 + 6*c2 ≤ 40
I wrote this code:
import numpy as np
from scipy import optimize

def main():
    """
    solving a regular constrained optimization problem
    max ln(cons[0]) + ln(cons[1])
    st. prices[0]*cons[0] + prices[1]*cons[1] <= I
    """
    prices = np.array([4.0, 6.0])
    I = 40.0
    util = lambda cons: np.dot( np.log(cons)) #define utility function
    budget = lambda cons: I - np.dot(prices, cons) #define the budget constraint
    initval = 40.0*np.ones(2) #set the initial guess for the algorithm
    res = optimize.minimize(lambda x: -util(x), initval, method='slsqp',
                            constraints={'type':'ineq', 'fun':budget},
                            tol=1e-9)
    assert res['success'] == True
    print(res)
Unfortunately, my code doesn't print any solution. Can you help me figure out why?
Your code raises a TypeError because np.dot expects two arguments; see the definition of your util function. Hence, use
util = lambda cons: np.sum(np.log(cons))  # same as np.dot(np.ones(2), np.log(cons))
instead.
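Putting that fix into the full script, a minimal runnable sketch follows. Two further changes are assumptions on my part: main() is actually called (the snippet in the question never calls it, so nothing would be printed either way), and the initial guess is moved to a feasible point, since 40*np.ones(2) violates the budget constraint. The expected optimum is roughly c1 = 5, c2 = 10/3:

import numpy as np
from scipy import optimize

def main():
    """Maximize ln(c1) + ln(c2) subject to 4*c1 + 6*c2 <= 40."""
    prices = np.array([4.0, 6.0])
    I = 40.0
    util = lambda cons: np.sum(np.log(cons))        # utility function
    budget = lambda cons: I - np.dot(prices, cons)  # budget constraint, >= 0 when feasible
    initval = np.ones(2)                            # a feasible starting point
    res = optimize.minimize(lambda x: -util(x), initval, method='SLSQP',
                            constraints={'type': 'ineq', 'fun': budget},
                            tol=1e-9)
    print(res.x)

main()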

Numpy-only computation of a mathematical expression involving a nested sum of functions over the same array

I need help computing a mathematical expression using only numpy operations. The expression I want to compute is the following:
result = sum_{i=1..N} sum_{j=1..N} prod_{s=1..S} f(x[i, s], x[j, s])
where x is an (N, S) array and f is a numpy function (one that works with broadcastable arrays, e.g. np.maximum, np.sum, np.prod, ...). If that is of importance, in my case f is a symmetric function.
So far my code looks like this:
s = 0
for xp in x:  # Loop over N...
    s += np.sum(np.prod(f(xp, x), axis=1))
and it still has a loop that I'd like to get rid of.
Typically N is "large" (around 30k) but S is small (less than 20), so if anyone can find a trick to loop only over S, that would still be a major improvement.
I believe the problem would be easy after replicating the array N times, but an array of size (32768, 32768, 20) requires about 150 GB of RAM that I don't have. A (32768, 32768) array does fit in memory, though I would appreciate a solution that does not allocate such an array.
Maybe a use of np.einsum with well-chosen arrays is possible?
Thanks for your replies. If any information is missing, let me know!
Have a nice day!
Edit 1:
The forms of f I'm interested in include (for now): f(x, y) = |x - y|, f(x, y) = |x - y|^2, f(x, y) = 2 - max(x, y).
Your loop is already quite efficient. Some possible alternatives are:
Method-1 (looping over S)
import numpy as np

def f(x, y):
    return np.abs(x - y)

N = 200
S = 20
x_data = np.random.rand(N, S)  # (i, s)
y_data = np.random.rand(N, S)  # (i', s)

product = f(np.broadcast_to(x_data[:, 0][..., None], (N, N)),
            np.broadcast_to(y_data[:, 0][..., None], (N, N)).T)
for i in range(1, S):
    product *= f(np.broadcast_to(x_data[:, i][..., None], (N, N)),
                 np.broadcast_to(y_data[:, i][..., None], (N, N)).T)
total = np.sum(product)
Method-2 (splitting the data into S blocks)
import numpy as np

def f(x, y):
    x1 = np.broadcast_to(x[:, None, ...], (x.shape[0], y.shape[0], x.shape[1]))
    y1 = np.broadcast_to(y[None, ...], (x.shape[0], y.shape[0], x.shape[1]))
    return np.abs(x1 - y1)

def f1(x1, y1):
    return np.abs(x1 - y1)

N = 5000
S = 20
x_data = np.random.rand(N, S)  # (i, s)
y_data = np.random.rand(N, S)  # (i', s)

def fun_new(x_data1, y_data1):
    s = 0
    pp = np.split(x_data1, S, axis=0)
    for xp in pp:
        s += np.sum(np.prod(f(xp, y_data1), axis=2))
    return s

def fun_op(x_data1, y_data1):
    s = 0
    for xp in x_data1:  # Loop over N...
        s += np.sum(np.prod(f1(xp, y_data1), axis=1))
    return s

fun_new(x_data, y_data)
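As a quick sanity check (not part of the original answer), the two variants can be compared directly. Note that fun_new relies on np.split, which requires N to be divisible by S:

assert np.isclose(fun_new(x_data, y_data), fun_op(x_data, y_data))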

curve_fit making the called func raise an IndexError

I'm trying to fit a parameter eta_H in the function TGp_xx to some data (x_data, data_num_xx) using curve_fit. The code below is a reduced version of what I'm using and won't run by itself, but I hope the issue is conceptual enough to be understandable even from this.
import numpy as np
from scipy.optimize import curve_fit

Lx = 150
y_cut = 20

data = np.loadtxt("../dump/results.dat")
ux = data[:, 3]
ux = np.reshape(ux, (Ly, Lx))  # Ly is defined in the full script, not in this reduced snippet

def Par_x(x, y, vec):
    # centred finite difference along x with periodic boundaries
    fdx = vec[(x + 1) % Lx, y]
    fsx = vec[(x - 1 + Lx) % Lx, y]
    return (fdx - fsx) / 2.0

def TGp_xx(x, eta_H):
    return 2 * eta_H * Par_x(x, y_cut, ux)

x_data = np.arange(Lx, dtype=np.int)
data_num_xx = np.empty(Lx, dtype='float64')  # this is just a placeholder

popt_xx, pcov_xx = curve_fit(TGp_xx, x_data, data_num_xx)
I get an IndexError raised within Par_x:
fdx = vec[(x+1)%Lx , y]
IndexError: arrays used as indices must be of integer (or boolean) type
I tried something simpler, like calling TGp_xx(x_data, some_constant) outside curve_fit, and it works. I don't really get why I get the IndexError inside curve_fit, as if it were passing a float value (or an array of floats) as x, which can't be used as an index.
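That is in fact what happens: curve_fit converts xdata to a float array internally, so the integer indices are lost by the time Par_x sees them. One possible workaround (a sketch of an assumed fix, since no accepted answer appears here) is to cast x back to integers inside the model function:

def TGp_xx(x, eta_H):
    xi = np.asarray(x).astype(int)  # curve_fit passes x as float64; cast back before indexing
    return 2 * eta_H * Par_x(xi, y_cut, ux)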

Jacobi Method & Basic Matrix Math using NUMPY

I'm getting an import error for "norm". What am I not doing correctly?
I'm open to constructive feedback on improving the code; however, I have to keep the parameters as they are!
Thanks!!!
Code is below:
import numpy as np
from numpy import norm, inalg, array, zeros, diag, diagflat, dot, linalg
"""Test Case Data"""
A = np.matrix([[4,-1,-1],[-2,6,1],[-1,1,7]])
b = np.matrix([[3],[9],[-6]])
x = np.matrix([[0],[0],[0]])
"""Main Function"""
def jacobi(A, b, x, Tolerance, Iterations):
V = np.diag(A)
D = np.diag(V)
R = D-A
D_I = D.I
D = np.asmatrix(D)
Counter_1 = 1
tol_gauge = 100
while Counter_1 <= Iterations:
# I considered using the "dot" function in NUMPY but I was wary of mixed results
iterative_approach_form = D_I * ((R*x)+b)
tol_gauge = np.linalg.norm(iterative_approach_form-x)
x = iterative_approach_form
if initial_tol <= Tolerance:
return("The Solution x = {},y={}, z={} ".format(x[0], x[1], x[2]))
return("The Solution was found in %s interation(s)" %(Counter_1))
else:
pass
Counter_1 +=1
return("The Solution was not found in {} iteration(s)".format(Iterations))
You need to specify which numpy submodule you are importing from. The following works if you want to refer to a function by its bare name:
from numpy import linalg
from numpy.linalg import norm
from numpy import zeros, array, diag, diagflat, dot
Looking at your code, however, you don't need the second import line at all, because the rest of the code already refers to the numpy functions through the np. prefix. For example, norm already appears in your code as np.linalg.norm.
There are three more issues with your code: 1) initial_tol is never assigned a value; 2) tol_gauge is assigned but never used; 3) the last return statement is not indented properly (perhaps only in the paste), and the same very likely applies to the block inside your while loop.
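For reference, here is a minimal corrected sketch of the Jacobi iteration with those issues fixed. It is one possible rewrite using plain arrays instead of np.matrix, not the original poster's exact structure:

import numpy as np

def jacobi(A, b, x0, tolerance, iterations):
    # Jacobi iteration: x_{k+1} = D^{-1} (b - (A - D) x_k)
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1)
    x = np.asarray(x0, dtype=float).reshape(-1)
    D = np.diag(A)              # diagonal entries of A as a 1-D array
    R = A - np.diagflat(D)      # off-diagonal part of A
    for k in range(1, iterations + 1):
        x_new = (b - R @ x) / D  # element-wise division replaces D^{-1}
        if np.linalg.norm(x_new - x) <= tolerance:
            return x_new, k
        x = x_new
    return x, iterations

A = np.array([[4, -1, -1], [-2, 6, 1], [-1, 1, 7]])
b = np.array([3, 9, -6])
x, k = jacobi(A, b, np.zeros(3), 1e-9, 100)
print("The solution x = {}, y = {}, z = {} was found in {} iteration(s)".format(x[0], x[1], x[2], k))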

odeint for a differential system

I have a problem with odeint. I have to solve a first-order differential system and then a second-order one, but I am a little confused by the first-order case. Can you explain what is going wrong? Thank you :)
import scipy.integrate as integrate
import numpy as np

def fun(t, y):
    ys = np.array([y[1], (1 - y[0]**2)*y[1] - y[0]])
    return(ys)

N = 3
x0 = np.array([2.00861986087484313650940188, 0])
t0tf = [0, 17.0652165601579625588917206249]
T = [0 for i in range(N+1)]
T[0] = t0tf[0]
Pas = (t0tf[1] - t0tf[0]) / N
for i in range(1, N+1):
    T[i] = t0tf[0] + i*Pas

X = integrate.odeint(fun, x0, T, Dfun=None, col_deriv=0, full_output=True)
T = np.array(T)
T = T.reshape(N+1, 1)
S = np.append(X, T, axis=1)
print(S)
The returned error is:
ys = np.array([y[1], (1-y[0]**2)*y[1]-y[0]])
TypeError: 'float' object is not subscriptable
You need to reverse the order of the arguments to your derivative function: odeint expects f(y, t), not f(t, y). (This is the opposite of the order used by the scipy.integrate.ode class.)
Also, the concatenation S = np.append(X,T,axis=1) will fail because with full_output=True, X is a tuple containing the solution array and an info dict. Use S = np.append(X[0],T,axis=1) instead.
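Putting both fixes together, a minimal corrected sketch of the script (keeping the question's variable names, with the time grid built via np.linspace for brevity):

import numpy as np
import scipy.integrate as integrate

def fun(y, t):
    # odeint passes (y, t), not (t, y)
    return np.array([y[1], (1 - y[0]**2)*y[1] - y[0]])

N = 3
x0 = np.array([2.00861986087484313650940188, 0])
T = np.linspace(0, 17.0652165601579625588917206249, N + 1)

X, info = integrate.odeint(fun, x0, T, full_output=True)
S = np.append(X, T.reshape(N + 1, 1), axis=1)  # columns: y[0], y[1], t
print(S)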
