I'm trying to speed up this python function:
def twoFreq_orig(z, source_z, num, den, matrix, e):
    Z1, Z2 = np.meshgrid(source_z, np.conj(z))
    Z1 **= num
    Z2 **= den - 1
    M = (e ** ((num + den - 2) / 2.0)) * Z1 * Z2
    return np.sum(matrix * M, 1)
where z and source_z are 1-D np.ndarrays (dtype=np.complex128), num and den are 2-D np.ndarrays (dtype=np.float64), matrix is a 2-D np.ndarray (dtype=np.complex128), and e is an np.float64.
I don't have much experience with Numba, but after reading some tutorials, I came up with this implementation:
@nb.jit(nb.f8[:](nb.c16[:], nb.c16[:], nb.f8[:, :], nb.f8[:, :], nb.c16[:, :], nb.f8))
def twoFreq(z, source_z, num, den, matrix, e):
    N1, N2 = len(z), len(source_z)
    out = np.zeros(N1)
    for r in xrange(N1):
        tmp = 0
        for c in xrange(N2):
            n, d = num[r, c], den[r, c] - 1
            z1 = source_z[c] ** n
            z2 = z[r] ** d
            tmp += matrix[r, c] * e ** ((n + d - 1) / 2.0) * z1 * z2
        out[r] = tmp
    return out
Unfortunately, instead of a speedup, the Numba implementation is several times slower than the original. I can't figure out how to use Numba properly. Any Numba gurus out there that can give me a hand?
Actually, I don't think there is much you can do to speed up your Numba function without more insight into the properties of your arrays (are there mathematical tricks to get some of the calculations done more quickly?).
But I noticed one error: you didn't conjugate your array in the Numba version, for example. I also edited some lines to make the code more streamlined (some of those changes may be a matter of taste). I've included comments in the appropriate places:
@nb.njit
def twoFreq(z, source_z, num, den, matrix, e):
    # Replace z with the conjugate of z (otherwise the result is wrong!)
    z = np.conj(z)
    # .size instead of len(); I don't know if it actually makes a difference, but it's cleaner
    N1, N2 = z.size, source_z.size
    # Must be zeros_like, otherwise you create a float array where you want a complex one
    out = np.zeros_like(z)
    # I'm using Python 3; on Python 2 replace range with xrange
    for r in range(N1):
        for c in range(N2):
            n, d = num[r, c], den[r, c] - 1
            z1 = source_z[c] ** n
            z2 = z[r] ** d
            # Multiply by 0.5 instead of dividing by 2
            # Work on the out array directly instead of a tmp variable
            out[r] += matrix[r, c] * e ** ((n + d - 1) * 0.5) * z1 * z2
    return out
def twoFreq_orig(z, source_z, num, den, matrix, e):
    Z1, Z2 = np.meshgrid(source_z, np.conj(z))
    Z1 **= num
    Z2 **= den - 1
    M = (e ** ((num + den - 2) / 2.0)) * Z1 * Z2
    return np.sum(matrix * M, 1)
numb = 1000
z = np.random.uniform(0,1,numb) + 1j*np.random.uniform(0,1,numb)
source_z = np.random.uniform(0,10,numb) + 1j*np.random.uniform(0,1,numb)
num = np.random.uniform(0,1,(numb,numb))
den = np.random.uniform(0,1,(numb,numb))
matrix = np.random.uniform(0,1,(numb,numb)) + 1j*np.random.uniform(0,1,(numb, numb))
e = 5.5
# This failed for your initial version:
np.testing.assert_array_almost_equal(twoFreq(z, source_z, num, den, matrix, e),
                                     twoFreq_orig(z, source_z, num, den, matrix, e))
And the runtimes on my computer were:
%timeit twoFreq(z, source_z, num, den, matrix, e)
1 loop, best of 3: 246 ms per loop
%timeit twoFreq_orig(z, source_z, num, den, matrix, e)
1 loop, best of 3: 344 ms per loop
It's approximately 30% faster than your NumPy solution. But I think the NumPy solution could be made a bit faster with clever use of broadcasting. Nevertheless, most of the speedup I got came from omitting the signature: note that you are probably using C-contiguous arrays, but the signature you gave allows arbitrary ordering, so Numba may generate slower code depending on the computer architecture. Declaring the layout explicitly, e.g. nb.c16[::1] for a C-contiguous 1-D array, should recover the same speed, but generally it's best to just let Numba infer the types; it will usually be as fast as it can be. The exception is if you need to pin down the precision of each input (for example, you want z to be complex128 rather than complex64).
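For illustration, here is a minimal sketch of the two options on a toy function (the body here is not your actual computation):

import numba as nb

# Option 1: let Numba infer the types at the first call (usually the best choice).
@nb.njit
def scaled_sum(a, s):
    total = 0.0j
    for i in range(a.size):
        total += s * a[i]
    return total

# Option 2: explicit signature; c16[::1] declares a C-contiguous complex128 array,
# which lets Numba generate faster code than the any-layout c16[:].
@nb.njit(nb.c16(nb.c16[::1], nb.f8))
def scaled_sum_typed(a, s):
    total = 0.0j
    for i in range(a.size):
        total += s * a[i]
    return total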
You will see a really big speedup once your NumPy solution starts running out of memory: because the NumPy solution is vectorized, it needs much more RAM! With numb = 5000 the Numba version was approximately 3x faster than the NumPy one.
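(For scale: a single (5000, 5000) complex128 temporary is 5000 * 5000 * 16 bytes, roughly 400 MB, and the vectorized expression materializes several such arrays at once.)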
EDIT:
By clever broadcasting I mean that
np.conj(z[:,None]**(den-1)) * source_z[None, :]**(num)
is equal to
z1, z2 = np.meshgrid(source_z, np.conj(z))
z1**(num) * z2**(den-1)
but with the first variant the 1-D inputs are broadcast on the fly, so you never materialize the two full (numb, numb) meshgrid copies before the power operation; the number of power evaluations is the same, but you save the allocation and filling of the two big intermediate arrays (even though I guess for small arrays those copies are probably mostly cached and not very expensive).
The meshgrid-free version for NumPy (which produces the same result) looks like this:
def twoFreq_orig2(z, source_z, num, den, matrix, e):
    z1z2 = source_z[None, :]**(num) * np.conj(z)[:, None]**(den-1)
    M = (e ** ((num + den - 2) / 2.0)) * z1z2
    return np.sum(matrix * M, 1)
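As a quick sanity check with the benchmark arrays defined above, the two NumPy variants should agree:

# Both NumPy variants should produce the same result:
assert np.allclose(twoFreq_orig(z, source_z, num, den, matrix, e),
                   twoFreq_orig2(z, source_z, num, den, matrix, e))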
This is a fairly direct question; I will generalize it a bit at the end. I am trying to implement this function in NumPy. I have been successful using nested for loops, but I can't think of a NumPy way to do it.
My way of implementation:
bs = 10 # batch_size
nb = 8 # number of bounding boxes
nc = 15 # number of classes
bbox = np.random.random(size=(bs, nb, 4)) # model output bounding boxes
p = np.random.random(size=(bs, nb, nc)) # model output probability
p = softmax(p, axis=-1)
s_rand = np.random.random(size=(nc, nc))
s = (s_rand + s_rand.T)/2 # similarity matrix
pp = np.random.random(size=(bs, nb, nc)) # proposed probability
pp = softmax(pp, axis=-1)
first_term = 0
for b in range(nb):
for b_1 in range(nb):
if b_1 == b:
continue
for l in range(nc):
for l_1 in range(nc):
first_term += (s[l, l_1] * (pp[:, b, l] - pp[:, b_1, l_1])**2)
second_term = 0
for b in range(nb):
for l in range(nc):
second_term += (np.linalg.norm(s[l, :], ord=1) * (pp[:, b, l] - p[:, b, l])**2)
second_term *= nb
epsilon = 0.5
output = ((1 - epsilon) * first_term) + (epsilon * second_term)
I have tried hard to remove the loops and use np.tile and np.repeat instead, but can't think of a way to do it. I have also searched for exercises that would teach this kind of NumPy conversion, without success.
P_hat.shape is (B,L), S.shape is (L,L), P.shape is (B,L).
array_before_sum = S[None,:,None,:]*(P_hat[:,:,None,None]- P_hat[None,None,:,:])**2
array_after_sum = array_before_sum.sum(axis=(1,3))
array_sum_again = (array_after_sum * (1 - np.eye(B))).sum()  # mask out the b == b_1 diagonal
first_term = (1-epsilon)*array_sum_again
second_term = epsilon*(B*np.abs(S).sum(axis=1)[None,:]*(P_hat - P)**2).sum()
I think you can do both with einsum:
first_term = np.einsum('km, ijklm -> i', s, (pp[..., None, None] - pp[:, None, None, ...])**2 )
second_term = np.einsum('k, ijk -> i', np.linalg.norm(s, axis = 1), (pp - p)**2 )
Now there's a problem: that ijklm tensor in first_term is going to get huge if nb and nc get large. You should probably distribute it so that you get 3 smaller tensors:
first_term = nb * np.einsum('km, ijk, ijk -> i', s, pp, pp) +\
             nb * np.einsum('km, ilm, ilm -> i', s, pp, pp) -\
             2 * np.einsum('km, ijk, ilm -> i', s, pp, pp)
This takes advantage of the fact that (a-b)**2 = a**2 + b**2 - 2ab, which lets you break the problem into three parts that can each be done in one step with a dot product. Note the factors of nb on the two squared terms: the dummy index dropped from each of them still ranges over the nb boxes in the big-tensor version.
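A quick numerical check of that identity on small random data (a sketch with hypothetical small shapes):

rng = np.random.default_rng(0)
pp_t = rng.random((2, 3, 4))  # (bs, nb, nc) = (2, 3, 4), so nb = 3
s_t = rng.random((4, 4))
naive = np.einsum('km, ijklm -> i', s_t,
                  (pp_t[..., None, None] - pp_t[:, None, None, ...])**2)
split = (3 * np.einsum('km, ijk, ijk -> i', s_t, pp_t, pp_t)
         + 3 * np.einsum('km, ilm, ilm -> i', s_t, pp_t, pp_t)
         - 2 * np.einsum('km, ijk, ilm -> i', s_t, pp_t, pp_t))
assert np.allclose(naive, split)  # the decomposition matches the big-tensor version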
Maximally optimized code (removal of the first two loops is inspired by L.Iridium's answer):
squared_diff = (pp[:, :, None, :, None] - pp[:, None, :, None, :]) ** 2
weighted_diff = s * squared_diff
# zero out the b == b_1 terms before summing
b_eq_b_1_removed = weighted_diff.sum(axis=(3, 4)) * (1 - np.eye(nb))
first_term = b_eq_b_1_removed.sum(axis=(1, 2))

normalized_s = np.linalg.norm(s, ord=1, axis=1)
squared_diff = (pp - p)**2
second_term = nb * (normalized_s * squared_diff).sum(axis=(1, 2))

loss = ((1 - epsilon) * first_term) + (epsilon * second_term)
timeit result:
512 µs ± 13 µs per loop
timeit result for the code posted in the question:
62.5 ms ± 197 µs per loop
That's a huge improvement.
In the following code I have implemented Simpson's rule in Python, and I am attempting to plot the absolute error as a function of n for a suitable range of integer values of n. I know the exact result should be 1 - cos(pi/2). However, my graph doesn't look right. How can I fix my code to get the correct output? There were two loops, and I don't think I implemented the plotting correctly.
def simpson(f, a, b, n):
    """Approximates the definite integral of f from a to b by the
    composite Simpson's rule, using n subintervals (with n even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    diffs = {}
    for i in range(1, n, 2):
        s += 4 * f(a + i * h)
    for i in range(2, n-1, 2):
        s += 2 * f(a + i * h)
    r = s
    exact = 1 - cos(pi/2)
    diff = abs(r - exact)
    diffs[n] = diff
    ordered = sorted(diffs.items())
    x, y = zip(*ordered)
    plt.autoscale()
    plt.loglog(x, y)
    plt.xlabel("Intervals")
    plt.ylabel("Error")
    plt.show()
    return s * h / 3

simpson(lambda x: sin(x), 0.0, pi/2, 100)
Your simpson method should just calculate the integral for a single value of n (as it does), but creating the plot for many values of n should happen outside that method:
from math import pi, cos, sin
from matplotlib import pyplot as plt

def simpson(f, a, b, n):
    """Approximates the definite integral of f from a to b by the
    composite Simpson's rule, using 2n subintervals."""
    h = (b - a) / (2*n)
    s = f(a) + f(b)
    for i in range(1, 2*n, 2):
        s += 4 * f(a + i * h)
    for i in range(2, 2*n-1, 2):
        s += 2 * f(a + i * h)
    return s * h / 3

diffs = {}
exact = 1 - cos(pi/2)
for n in range(1, 100):
    result = simpson(lambda x: sin(x), 0.0, pi/2, n)
    diffs[2*n] = abs(exact - result)  # use 2*n or n here, your choice

ordered = sorted(diffs.items())
x, y = zip(*ordered)
plt.autoscale()
plt.loglog(x, y)
plt.xlabel("Intervals")
plt.ylabel("Error")
plt.show()
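On the resulting log-log plot the error should fall along a line of slope about -4, the expected O(h^4) convergence of composite Simpson's rule, until floating-point round-off takes over for very large n.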
Starting with:
a,b=np.ogrid[0:n+1:1,0:n+1:1]
B=np.exp(1j*(np.pi/3)*np.abs(a-b))
B[z,b] = np.exp(1j * (np.pi/3) * np.abs(z - b +x))
B[a,z] = np.exp(1j * (np.pi/3) * np.abs(a - z +x))
B[diag,diag]=1-1j/np.sqrt(3)
this produces an (n+1) x (n+1) grid that acts as a matrix. n is just a number chosen to set the index range, i.e. a matrix whose row index a and column index b both go up to n. z is a constant index at which I choose to replace a row and a column with the B[z,b] and B[a,z] formulas (essentially the same formula, but with a small number x added inside np.abs(a - b)).
The diagonal of the matrix is given by the bottom line:
B[diag,diag]=1-1j/np.sqrt(3)
where
diag=np.arange(n+1)
I would like to repeat this code 50 times, where the only thing that changes is x, so I end up with 50 versions of B. x is a randomly generated number between -0.8 and 0.8 each time.
x=np.random.uniform(-0.8,0.8)
I want to generate 50 versions of B with random values of x each time and take a geometric average of the 50 versions of B using the definition:
def geo_mean(y):
    y = np.asarray(y)
    return np.prod(y ** (1.0 / y.shape[0]), axis=-1)
I have tried to make B a function of some index and then use a for _ in range(...) loop, but that doesn't work. Aside from copying and pasting the block 50 times and naming the results B1, B2, B3, etc., I can't think of another way of working this out.
EDIT:
I'm now using part of a given solution in order to show clearly what I am looking for:
# A matrix with 50 random values between -0.8 and 0.8 to be used in the loop
X = np.random.uniform(-0.8, 0.8, (50, 1))

# constructing the base array before modification by random x values in position z
a, b = np.ogrid[0:n+1:1, 0:n+1:1]
B = np.exp(1j * (np.pi / 3) * np.abs(a - b))
B[diag, diag] = 1 - 1j / np.sqrt(3)

# list to store all modified arrays
randomarrays = []
for i in range(0, 50):
    # copy array and modify it
    Bnew = np.copy(B)
    Bnew[z, b] = np.exp(1j * (np.pi / 3) * np.abs(z - b + X[i]))
    Bnew[a, z] = np.exp(1j * (np.pi / 3) * np.abs(a - z + X[i]))
    randomarrays.append(Bnew)
Bstack = np.dstack(randomarrays)

# calculate the geometric mean value along the axis that was the row in 2D arrays
B0 = geo_mean(Bstack)
In this example, every iteration of i seems to use the same value of X; I can't find a way to make each new iteration of i use the next value in X. I know the ++ operator does not exist in Python, but I don't know the Python equivalent. I want one loop iteration to use a value of X, the next to use the following value, and so on, so that I can dstack all the matrices at the end and take a geo_mean over each element of the stacked matrices.
One pedestrian way would be to use a list comprehension or generator expression:
>>> def f(n, z, x):
...     diag = np.arange(n+1)
...     a, b = np.ogrid[0:n+1:1, 0:n+1:1]
...     B = np.exp(1j*(np.pi/3)*np.abs(a-b))
...     B[z, b] = np.exp(1j * (np.pi/3) * np.abs(z - b + x))
...     B[a, z] = np.exp(1j * (np.pi/3) * np.abs(a - z + x))
...     B[diag, diag] = 1 - 1j/np.sqrt(3)
...     return B
...
>>> X = np.random.uniform(-0.8, 0.8, (10,))
>>> np.prod((*map(np.power, map(f, 10*(4,), 10*(2,), X), 10 * (1/10,)),), axis=0)
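Spelled out, and assuming the same f and X, that one-liner is equivalent to this more readable sketch:

Bs = [f(4, 2, x) for x in X]  # one B per random x (here n=4, z=2, 10 samples)
result = np.prod([Bk ** (1/len(Bs)) for Bk in Bs], axis=0)  # elementwise geometric mean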
But in your concrete example we can do much better than that: using the identity exp(a) * exp(b) = exp(a + b), we can convert the geometric mean after exponentiation into an arithmetic mean before exponentiation. A bit of care is required because of the multivaluedness of the complex n-th root that occurs in the geometric mean. In the code below we normalize the occurring angles to the range (-pi, pi] so as to always hit the same branch as the n-th root.
Please also note that the geo_mean function you provide is definitely wrong. It fails the basic sanity check that averaging several copies of the same thing should return that same thing. I've provided a better version. It is still not perfect, but I think there actually is no perfect solution, because of the non-uniqueness of the complex root.
Because of this I recommend taking the average before exponentiating. As long as your random spread is less than pi, this gives a well-defined averaging procedure whose result is actually close to the samples.
import numpy as np

def f(n, z, X, do_it_pps_way=True):
    X = np.asanyarray(X)
    diag = np.arange(n+1)
    a, b = np.ogrid[0:n+1:1, 0:n+1:1]
    B = np.exp(1j*(np.pi/3)*np.abs(a-b))
    X = X.reshape(-1, 1, 1)
    if do_it_pps_way:
        zbx = np.mean(np.abs(z-b+X), axis=0)
        azx = np.mean(np.abs(a-z+X), axis=0)
    else:
        zbx = np.mean((np.abs(z-b+X)+3) % 6 - 3, axis=0)
        azx = np.mean((np.abs(a-z+X)+3) % 6 - 3, axis=0)
    B[z, b] = np.exp(1j * (np.pi/3) * zbx)
    B[a, z] = np.exp(1j * (np.pi/3) * azx)
    B[diag, diag] = 1 - 1j/np.sqrt(3)
    return B
def geo_mean(y):
    y = np.asarray(y)
    dim = len(y.shape)
    y = np.atleast_2d(y)
    v = np.prod(y, axis=0) ** (1.0 / y.shape[0])
    return v[0] if dim == 1 else v

def geo_mean_correct(y):
    y = np.asarray(y)
    return np.prod(y ** (1.0 / y.shape[0]), axis=0)
# demo that orig geo_mean is wrong
B = np.exp(1j * np.random.random((5, 5)))
# the mean of four times the same thing should be the same thing:
if not np.allclose(B, geo_mean([B, B, B, B])):
    print('geo_mean failed')
if np.allclose(B, geo_mean_correct([B, B, B, B])):
    print('but geo_mean_correct works')

n, z, m = 10, 3, 50
X = np.random.uniform(-0.8, 0.8, (m,))

B0 = f(n, z, X, do_it_pps_way=False)
B1 = np.prod((*map(np.power, map(f, m*(n,), m*(z,), X), m * (1/m,)),), axis=0)
B2 = geo_mean_correct([f(n, z, x) for x in X])

# This is the recommended way:
B_recommended = f(n, z, X, do_it_pps_way=True)

print()
print(np.allclose(B1, B0))
print(np.allclose(B2, B1))
I think you should rely more on NumPy functionality when approaching your problem. Not a NumPy expert myself, so there is surely room for improvement:
from scipy.stats import gmean

n = 2
z = 1
a = np.arange(n + 1).reshape(1, n + 1)

# constructing the base array before modification by random x values in position z
B = np.exp(1j * (np.pi / 3) * np.abs(a - a.T))
B[a, a] = 1 - 1j / np.sqrt(3)

# list to store all modified arrays
random_arrays = []
for _ in range(50):
    # generate random x value
    x = np.random.uniform(-0.8, 0.8)
    # copy array and modify it
    B_new = np.copy(B)
    B_new[z, a] = np.exp(1j * (np.pi / 3) * np.abs(z - a + x))
    B_new[a, z] = np.exp(1j * (np.pi / 3) * np.abs(a - z + x))
    random_arrays.append(B_new)

# store all B arrays as a 3D array (np.stack puts them along a new axis 0)
B_stack = np.stack(random_arrays)

# geometric mean across the 50 stacked arrays, elementwise
geom_mean_for_rows = gmean(B_stack, axis=0)
It uses the geometric mean function from the scipy.stats module for a vectorised approach to this calculation.
I have a 3D numpy array A of shape (2133, 3, 3); basically, this is a list of 2133 lists of three 3D points. Furthermore, I have a function which takes three 3D points and returns one 3D point, x = f(a, b, c), with a, b, c, x numpy arrays of length 3. Now I want to apply f to A so that the output is an array of shape (2133, 3), i.e. something like numpy.array([f(*A[0]), ..., f(*A[2132])]).
I tried numpy.apply_along_axis and numpy.vectorize without success.
To be more precise, the function f I consider is given by:
def f(a, b, c, r1, r2=None, r3=None):
    a = np.asarray(a)
    b = np.asarray(b)
    c = np.asarray(c)

    if np.linalg.matrix_rank(np.matrix([a, b, c])) != 3:
        # raise ValueError('The points are not collinear.')
        return None

    a, b, c = sort_triple(a, b, c)

    if any(r is None for r in (r2, r3)):
        r2, r3 = (r1, r1)

    ex = (b - a) / (np.linalg.norm(b - a))
    i = np.dot(ex, c - a)
    ey = (c - a - i*ex) / (np.linalg.norm(c - a - i*ex))
    ez = np.cross(ex, ey)
    d = np.linalg.norm(b - a)
    j = np.dot(ey, c - a)
    x = (pow(r1, 2) - pow(r2, 2) + pow(d, 2)) / (2 * d)
    y = ((pow(r1, 2) - pow(r3, 2) + pow(i, 2) + pow(j, 2)) / (2*j)) - ((i/j)*x)
    z_square = pow(r1, 2) - pow(x, 2) - pow(y, 2)
    if z_square >= 0:
        z = np.sqrt(z_square)
        intersection = a + x * ex + y*ey + z*ez
        return intersection
A = np.array([[[131.83, 25.2, 0.52], [131.51, 22.54, 0.52],[133.65, 23.65, 0.52]], [[13.02, 86.98, 0.52], [61.02, 87.12, 0.52],[129.05, 87.32, 0.52]]])
r1 = 1.7115
Thanks to the great help of @jdehesa I was able to produce an alternative solution to the one given by @hpaulj. I am not sure if this solution is the most elegant one, but it has worked so far. Comments are appreciated.
def sort_triple(a, b, c):
    pts = np.stack((a, b, c), axis=1)
    xSorted = pts[np.arange(pts.shape[0])[:, None], np.argsort(pts[:, :, 0])]
    orientation = np.cross(xSorted[:, 1] - xSorted[:, 0],
                           xSorted[:, 2] - xSorted[:, 0])[:, 2] >= 0
    xSorted_flipped = np.stack((xSorted[:, 0], xSorted[:, 2], xSorted[:, 1]),
                               axis=1)
    xSorted = np.where(orientation[:, np.newaxis, np.newaxis], xSorted,
                       xSorted_flipped)
    return map(np.squeeze, np.split(xSorted, 3, axis=1))
# inner1d used to live in numpy.core.umath_tests (deprecated in newer NumPy versions)
from numpy.core.umath_tests import inner1d

def f(A, r1, r2=None, r3=None):
    a, b, c = map(np.squeeze, np.split(A, 3, axis=1))
    a, b, c = sort_triple(a, b, c)

    if any(r is None for r in (r2, r3)):
        r2, r3 = (r1, r1)

    ex = (b - a) / (np.linalg.norm(b - a, axis=1))[:, np.newaxis]
    i = inner1d(ex, (c - a))
    ey = ((c - a - i[:, np.newaxis]*ex) /
          (np.linalg.norm(c - a - i[:, np.newaxis]*ex, axis=1))[:, np.newaxis])
    ez = np.cross(ex, ey)
    d = np.linalg.norm(b - a, axis=1)
    j = inner1d(ey, c - a)
    x = (np.square(r1) - np.square(r2) + np.square(d)) / (2 * d)
    y = ((np.square(r1) - np.square(r3) + np.square(i) + np.square(j)) / (2*j) -
         i/j*x)
    z_square = np.square(r1) - np.square(x) - np.square(y)
    mask = z_square < 0
    z_square[mask] *= 0
    z = np.sqrt(z_square)
    z[mask] = np.nan
    intersection = (a + x[:, np.newaxis] * ex + y[:, np.newaxis] * ey +
                    z[:, np.newaxis] * ez)
    return intersection
The map parts in each function could probably be done better, and maybe also the excessive use of np.newaxis.
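For reference, a sketch of how this vectorized version is called on the example data from the question (rows whose z_square was negative come back with NaN in the z contribution):

result = f(A, r1)  # shape (len(A), 3)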
This works fine (after commenting out sort_triple):
res = [f(*row,r1) for row in A]
print(res)
producing:
[array([ 132.21182324, 23.80481826, 1.43482849]), None]
That looks like one row produced a (3,) array, the other had some sort of problem and produced None. I don't know if that None was due to removing the sort or not. But in any case, turning a mix of arrays and None back into an array would be a problem. If all items of res were matching arrays, we could stack them back into a 2d array.
There are ways of getting modest speed improvements (compared to this list comprehension). But with a complex function like this, the time spent in the function (called 2000 times) dominates the time spent by the iteration mechanism.
And since you are iterating on the 1st dimension and passing the other two (as 3 arrays), this explicit loop is a lot easier to use than vectorize, frompyfunc or apply_along_axis/apply_over_axes...
To get significant time savings you have to write f() to work with the 3d array directly.
I want to repeatedly calculate a two-dimensional complex integral using dblquad from scipy.integrate. As the number of evaluations will be quite high I would like to increase the evaluation speed of my code.
Dblquad does not seem to be able to handle complex integrands. Thus, I have split the complex integrand into a real and an imaginary part:
def integrand_real(x, y):
    R1 = sqrt(x**2 + (y-y0)**2 + z**2)
    R2 = sqrt(x**2 + y**2 + zxp**2)
    return real(exp(1j*k*(R1-R2)) * (-1j*z/lam/R2/R1**2) * (1+1j/k/R1))

def integrand_imag(x, y):
    R1 = sqrt(x**2 + (y-y0)**2 + z**2)
    R2 = sqrt(x**2 + y**2 + zxp**2)
    return imag(exp(1j*k*(R1-R2)) * (-1j*z/lam/R2/R1**2) * (1+1j/k/R1))
y0, z, zxp, k, and lam are variables defined in advance. To evaluate the integral over the area of a circle with radius ra, I use the following commands:
from __future__ import division
from scipy.integrate import dblquad
from pylab import *

def ymax(x):
    return sqrt(ra**2 - x**2)

lam = 0.000532
zxp = 5.
z = 4.94
k = 2*pi/lam
ra = 1.0

res_real = dblquad(integrand_real, -ra, ra, lambda x: -ymax(x), lambda x: ymax(x))
res_imag = dblquad(integrand_imag, -ra, ra, lambda x: -ymax(x), lambda x: ymax(x))
res = res_real[0] + 1j*res_imag[0]
According to the profiler the two integrands are evaluated about 35000 times. The total calculation takes about one second, which is too long for the application I have in mind.
I am a beginner in scientific computing with Python and SciPy and would be happy about comments that point out ways of improving the evaluation speed. Are there ways of rewriting the commands in the integrand_real and integrand_imag functions that could lead to significant speed improvements?
Would it make sense to compile those functions using tools like Cython? If yes: Which tool would best fit this application?
You can gain a factor of about 10 in speed by using Cython; see below:
In [87]: %timeit cythonmodule.doit(lam=lam, y0=y0, zxp=zxp, z=z, k=k, ra=ra)
1 loops, best of 3: 501 ms per loop
In [85]: %timeit doit()
1 loops, best of 3: 4.97 s per loop
This is probably not enough, and the bad news is that this is probably quite close (maybe a factor of 2 at most) to everything-in-C/Fortran speed, at least while using the same algorithm for adaptive integration (scipy.integrate.quad itself is already in Fortran). To get further, you'd need to consider different ways to do the integration; that requires some thinking, and I can't offer much off the top of my head right now.
Alternatively, you can loosen the tolerance to which the integral is evaluated.
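dblquad exposes this through its epsabs and epsrel keyword arguments (both default to roughly 1.49e-8); for example:

# Looser tolerances speed up the adaptive quadrature at the cost of accuracy:
res_real = dblquad(integrand_real, -ra, ra, lambda x: -ymax(x), lambda x: ymax(x),
                   epsabs=1e-5, epsrel=1e-5)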
# Do in Python:
#
# >>> import pyximport; pyximport.install(reload_support=True)
# >>> import cythonmodule

cimport numpy as np
cimport cython

cdef extern from "complex.h":
    double complex csqrt(double complex z) nogil
    double complex cexp(double complex z) nogil
    double creal(double complex z) nogil
    double cimag(double complex z) nogil

from libc.math cimport sqrt
from scipy.integrate import dblquad

cdef class Params:
    cdef public double lam, y0, k, zxp, z, ra

    def __init__(self, lam, y0, k, zxp, z, ra):
        self.lam = lam
        self.y0 = y0
        self.k = k
        self.zxp = zxp
        self.z = z
        self.ra = ra

@cython.cdivision(True)
def integrand_real(double x, double y, Params p):
    R1 = sqrt(x**2 + (y-p.y0)**2 + p.z**2)
    R2 = sqrt(x**2 + y**2 + p.zxp**2)
    return creal(cexp(1j*p.k*(R1-R2)) * (-1j*p.z/p.lam/R2/R1**2) * (1+1j/p.k/R1))

@cython.cdivision(True)
def integrand_imag(double x, double y, Params p):
    R1 = sqrt(x**2 + (y-p.y0)**2 + p.z**2)
    R2 = sqrt(x**2 + y**2 + p.zxp**2)
    return cimag(cexp(1j*p.k*(R1-R2)) * (-1j*p.z/p.lam/R2/R1**2) * (1+1j/p.k/R1))

def ymax(double x, Params p):
    return sqrt(p.ra**2 - x**2)

def doit(lam, y0, k, zxp, z, ra):
    p = Params(lam=lam, y0=y0, k=k, zxp=zxp, z=z, ra=ra)
    rr, err = dblquad(integrand_real, -ra, ra, lambda x: -ymax(x, p), lambda x: ymax(x, p), args=(p,))
    ri, err = dblquad(integrand_imag, -ra, ra, lambda x: -ymax(x, p), lambda x: ymax(x, p), args=(p,))
    return rr + 1j*ri
Have you considered multiprocessing? It seems that you don't need to do a final integration (over the whole set), so simple parallel processing might be the answer. Even if you did have to integrate, you can wait for the running workers to finish their computations before doing the final integration; that is, block the main process until all workers have completed.
http://docs.python.org/2/library/multiprocessing.html
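A minimal sketch of that idea, reusing integrand_real, integrand_imag and ymax from the question (the strip count and process count here are arbitrary choices):

import numpy as np
from multiprocessing import Pool
from scipy.integrate import dblquad

def integrate_strip(bounds):
    # Integrate real and imaginary parts over one vertical strip [x0, x1] of the disk.
    x0, x1 = bounds
    rr, _ = dblquad(integrand_real, x0, x1, lambda x: -ymax(x), lambda x: ymax(x))
    ri, _ = dblquad(integrand_imag, x0, x1, lambda x: -ymax(x), lambda x: ymax(x))
    return rr + 1j*ri

if __name__ == '__main__':
    edges = np.linspace(-ra, ra, 9)  # split the x-range into 8 strips
    with Pool(4) as pool:            # 4 worker processes
        partials = pool.map(integrate_strip, list(zip(edges[:-1], edges[1:])))
    res = sum(partials)              # the strip integrals add up to the disk integral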
quadpy (a project of mine) supports many integration schemes for functions over disks. It supports complex-valued integrands and is fully vectorized. For example, with Peirce's scheme of order 83:
from numpy import sqrt, pi, exp
import quadpy
lam = 0.000532
zxp = 5.0
z = 4.94
k = 2 * pi / lam
ra = 1.0
y0 = 0.0
def f(X):
    x, y = X
    R1 = sqrt(x ** 2 + (y - y0) ** 2 + z ** 2)
    R2 = sqrt(x ** 2 + y ** 2 + zxp ** 2)
    return exp(1j * k * (R1 - R2)) * (-1j * z / lam / R2 / R1 ** 2) * (1 + 1j / k / R1)
scheme = quadpy.disk.peirce_1957(20)
val = scheme.integrate(f, [0.0, 0.0], ra)
print(val)
(18.57485726096671+9.619636385589759j)