PyCUDA blocks and grids to work with big data - Python

I need help to know the size of my blocks and grids.
I'm building a Python app to perform metric calculations based on SciPy, such as Euclidean distance, Manhattan, Pearson and cosine, among others.
The project is PycudaDistances.
It seems to work very well with small arrays, but when I performed a more exhaustive test, unfortunately it did not work. I downloaded the MovieLens dataset (http://www.grouplens.org/node/73).
Using MovieLens 100k, I declared an array with shape (943, 1682), that is, 943 users and 1682 rated films. Films not rated by a user are set to 0.
With a much larger array the algorithm no longer works. I get the following error:
pycuda._driver.LogicError: cuFuncSetBlockShape failed: invalid value.
Researching this error, I found an explanation from Andrew that a block supports only up to 512 threads, and that to work with larger arrays it is necessary to use both blocks and grids.
I would like help adapting the Euclidean distance algorithm to work with arrays ranging from small to very large.
def euclidean_distances(X, Y=None, inverse=True):
    X, Y = check_pairwise_arrays(X, Y)
    rows = X.shape[0]
    cols = Y.shape[0]
    solution = numpy.zeros((rows, cols))
    solution = solution.astype(numpy.float32)

    kernel_code_template = """
    #include <math.h>

    __global__ void euclidean(float *x, float *y, float *solution) {

        int idx = threadIdx.x + blockDim.x * blockIdx.x;
        int idy = threadIdx.y + blockDim.y * blockIdx.y;

        float result = 0.0;

        for(int iter = 0; iter < %(NDIM)s; iter++) {

            float x_e = x[%(NDIM)s * idy + iter];
            float y_e = y[%(NDIM)s * idx + iter];
            result += pow((x_e - y_e), 2);
        }

        int pos = idx + %(NCOLS)s * idy;
        solution[pos] = sqrt(result);
    }
    """
    kernel_code = kernel_code_template % {
        'NCOLS': cols,
        'NDIM': X.shape[1]
    }

    mod = SourceModule(kernel_code)

    func = mod.get_function("euclidean")
    func(drv.In(X), drv.In(Y), drv.Out(solution), block=(cols, rows, 1))

    return numpy.divide(1.0, (1.0 + solution)) if inverse else solution
For more details see: https://github.com/vinigracindo/pycudaDistances/blob/master/distances.py

To size the execution parameters for your kernel you need to do two things (in this order):
1. Determine the block size
Your block size will mostly be determined by hardware limitations and performance. I recommend reading this answer for more detailed information, but the very short summary is that your GPU has a limit on the total number of threads per block it can run, and it has a finite register file, shared memory and local memory size. The block dimensions you select must fall inside these limits, otherwise the kernel will not run. The block size can also affect the performance of the kernel, and you will find a block size which gives optimal performance. Block size should always be a round multiple of the warp size, which is 32 on all CUDA compatible hardware released to date.
2. Determine the grid size
For the sort of kernel you have shown, the number of blocks you need is directly related to the amount of input data and the dimensions of each block.
If, for example, your input array size was 943x1682, and you had a 16x16 block size, you would need a 59 x 106 grid, which would yield 944x1696 threads in the kernel launch. Because the input data size is not a round multiple of the block size, you will need to modify your kernel to ensure that it doesn't read out of bounds. One approach could be something like:
__global__ void euclidean(float *x, float *y, float *solution) {
    int idx = threadIdx.x + blockDim.x * blockIdx.x;
    int idy = threadIdx.y + blockDim.y * blockIdx.y;

    if ( ( idx < %(NCOLS)s ) && ( idy < %(NDIM)s ) ) {
        .....
    }
}
The Python code to launch the kernel could look something like this:
bdim = (16, 16, 1)
dx, mx = divmod(cols, bdim[0])
dy, my = divmod(rows, bdim[1])
gdim = ( (dx + (mx>0)) * bdim[0], (dy + (my>0)) * bdim[1]) )
func(drv.In(X), drv.In(Y), drv.Out(solution), block=bdim, grid=gdim)
This question and answer may also help understand how this process works.
Please note that all of the above code was written in the browser and has never been tested. Use it at your own risk.
Also note it was based on a very brief reading of your code and might not be correct because you have not really described anything about how the code is called in your question.
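As an aside on step 1, the per-block hardware limits can be queried at runtime through PyCUDA instead of being hard-coded; a minimal, untested sketch:

import pycuda.driver as drv

drv.init()
dev = drv.Device(0)

max_threads = dev.get_attribute(drv.device_attribute.MAX_THREADS_PER_BLOCK)
max_block_x = dev.get_attribute(drv.device_attribute.MAX_BLOCK_DIM_X)
max_block_y = dev.get_attribute(drv.device_attribute.MAX_BLOCK_DIM_Y)

print(max_threads, max_block_x, max_block_y)   # e.g. 1024 1024 1024 on recent GPUs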

The accepted answer is correct in principle; however, the code that talonmies has listed is not quite correct. The line:
gdim = ( (dx + (mx>0)) * bdim[0], (dy + (my>0)) * bdim[1]) )
Should Be:
gdim = ( (dx + (mx>0)), (dy + (my>0)) )
Besides the obvious extra parenthesis, gdim would launch far more threads than you want. talonmies explained it correctly in his text: the total number of threads is the block size * grid size. However, the gdim he listed gives you the total thread count, not the grid size, which is what is desired.
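Putting the two answers together, a corrected launch might look something like this (an untested sketch, reusing the variable names from the accepted answer):

bdim = (16, 16, 1)                      # threads per block
dx, mx = divmod(cols, bdim[0])
dy, my = divmod(rows, bdim[1])
gdim = (dx + (mx > 0), dy + (my > 0))   # blocks per grid, not total threads

func(drv.In(X), drv.In(Y), drv.Out(solution), block=bdim, grid=gdim)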

Related

Working with very large matrices in numpy

I have a transition matrix for which I want to calculate a steady state vector. The code I'm using is adapted from this question, and it works well for matrices of normal size:
def steady_state(matrix):
    dim = matrix.shape[0]
    q = (matrix - np.eye(dim))
    ones = np.ones(dim)
    q = np.c_[q, ones]
    qtq = np.dot(q, q.T)
    bqt = np.ones(dim)
    return np.linalg.solve(qtq, bqt)
However, the matrix I'm working with has about 1.5 million rows and columns. It isn't a sparse matrix either; most entries are small but non-zero. Of course, just trying to build that matrix throws a memory error.
How can I modify the above code to work with huge matrices? I've heard of solutions like PyTables, but I'm not sure how to apply them, and I don't know if they would work for tasks like np.linalg.solve.
Being very new to numpy and very inexperienced with linear algebra, I'd very much appreciate an example of what to do in my case. I'm open to using something other than numpy, and even something other than Python if needed.
Here are some ideas to start with:
We can use the fact that any initial probability vector will converge on the steady state under time evolution (assuming it's ergodic, aperiodic, regular, etc).
For small matrices we could use
def steady_state(matrix):
    dim = matrix.shape[0]
    prob = np.ones(dim) / dim
    other = np.zeros(dim)
    while np.linalg.norm(prob - other) > 1e-3:
        other = prob.copy()
        prob = other @ matrix
    return prob
(I think the convention assumed by the function in the question is that distributions go in rows.)
Now we can use the fact that matrix multiplication and norm can be done chunk by chunk:
def steady_state_chunk(matrix, block_in=100, block_out=10):
    dim = matrix.shape[0]
    prob = np.ones(dim) / dim
    error = 1.
    while error > 1e-3:
        error = 0.
        other = prob.copy()
        for i in range(0, dim, block_out):
            outs = np.s_[i:i+block_out]
            vec_out = np.zeros(block_out)
            for j in range(0, dim, block_in):
                ins = np.s_[j:j+block_in]
                vec_out += other[ins] @ matrix[ins, outs]
            error += np.linalg.norm(vec_out - prob[outs])**2
            prob[outs] = vec_out
        error = np.sqrt(error)
    return prob
This should use less memory for temporaries, though you could do better by using the out parameter of np.matmul.
I should add something to deal with the last slice in each loop, in case dim isn't divisible by block_*, but I hope you get the idea.
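For instance, here is a sketch of the same function with the last partial block handled by sizing vec_out from the actual slice (illustrative only, not tested; NumPy slicing past the end of an array is already safe, so only the zeros() size needs care):

def steady_state_chunk_safe(matrix, block_in=100, block_out=10, tol=1e-3):
    dim = matrix.shape[0]
    prob = np.ones(dim) / dim
    error = 1.
    while error > tol:
        error = 0.
        other = prob.copy()
        for i in range(0, dim, block_out):
            outs = np.s_[i:i+block_out]
            # zeros_like(prob[outs]) matches a shorter final slice automatically
            vec_out = np.zeros_like(prob[outs])
            for j in range(0, dim, block_in):
                ins = np.s_[j:j+block_in]
                vec_out += other[ins] @ matrix[ins, outs]
            error += np.linalg.norm(vec_out - prob[outs])**2
            prob[outs] = vec_out
        error = np.sqrt(error)
    return prob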
For arrays that don't fit in memory to start with, you can apply the tools from the links in the comments above.

Why do we use enumerate in this program?

I'm taking a class called numerical methods, where we learn how to write programs for certain problems in physics. We had to write 4 programs which could solve ODEs (implicit/explicit Euler, velocity Verlet, implicit midpoint rule); now we have to calculate the error using |y_N - y(T)|. We already have a template which we need to fill out.
This is the code which we have to complete.
def ex2_d():
    T = 0.2
    y0 = np.array([0.3, 0.0])

    all_methods = [explicit_euler, implicit_euler, implicit_mid_point, velocity_verlet]
    all_rhs = 3*[pendulum_rhs] + [pendulum_verlet_rhs]

    resolutions = 2**np.arange(4, 11)

    _, y_exact = ode45(pendulum_rhs, (0.0, T), y0, reltol=1e-12)

    for method, rhs in zip(all_methods, all_rhs):
        error = np.empty(resolutions.size)

        for k, N in enumerate(resolutions):
            # TODO: compute the solution and the error
            error[k] = np.absolute(methode())

        rate = convergence_rate(error, resolutions)
        print(method)
        print("rate: " + str(rate) + "\n")
The only thing I need to fill out is the TODO part. But I don't understand the for loop, which loops over k and N in enumerate(resolutions), and why is the resolutions array declared the way it is anyway?
Thank you in advance for your help!
In numerically solving an ODE, you want to have doubling resolutions (halving step sizes), to find the convergence rate, using the standard method:
(u_h - u_(h/2))/(u_(h/2) - u_(h/4)) = 2^p + O(h)
with u_h the numerical solution at step size h, u_(h/2) the solution with step h/2 (i.e. double the resolution) and u_(h/4) the solution with step h/4 (i.e. double the resolution again). The order of the error is p, which gives a convergence rate of h^p.
This is why the resolutions are declared as 2**np.arange(4, 11), which gives [16, 32, 64, 128, 256, 512, 1024]. (You can use other grid sizes, which will change the formula accordingly. For more information, see this.)
To store the errors in a list, you need the corresponding indices of the resolutions, which is why enumerate is used:
enumerate(resolutions) -> [(0,16), (1,32), (2,64), (3,128), (4,256), (5,512), (6,1024)]
which is unpacked by the for loop:
iteration   k   N
    1       0   16
    2       1   32
    etc.
The aim of this exercise is to compare different methods for solving the differential equation given by pendulum_rhs.
The quantity by which the comparison takes place is the convergence rate. In order to determine this rate you need to solve the DE with varying resolution (of the underlying grid) and compute the error for every resolution.
The resolutions to use are given: resolutions = [16, 32, 64, ...].
So for a given method method, you iterate over the resolutions:
for k in range(len(resolutions)):
    N = resolutions[k]
    # calculate the result using N
    result = method(..., N, ...)
    # store it in the error array
    error[k] = np.abs(y_exact - result)
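Putting it together with enumerate, the TODO in the template could be filled in roughly like this (a sketch only: the exact call signature of each solver method is not shown in the question, so the arguments below are assumptions):

for k, N in enumerate(resolutions):
    # hypothetical call; adjust to the real signature of your solvers
    _, y = method(rhs, (0.0, T), y0, N)
    # |y_N - y(T)|: numerical solution at the final time vs. the reference solution
    error[k] = np.linalg.norm(y[-1] - y_exact[-1])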

Struct pointers in Cython with GSL Monte-Carlo minimization

I'm stuck on this exercise and am not good enough to resolve it. Basically I am writing a Monte-Carlo maximum likelihood algorithm for the Bernoulli distribution. The problem is that I have to pass the data as a parameter to the GSL minimization (one-dimensional) algorithm, and I also need to pass the size of the data (since the outer loop runs over different sample sizes of the "observed" data). So I'm attempting to pass these parameters as a struct. However, I'm running into segfaults and I'm SURE it is coming from the portion of the code that concerns the struct and treats it as a pointer.
[EDIT: I have corrected for allocation of the struct and its components]
%%cython

#!python
#cython: boundscheck=False, wraparound=False, nonecheck=False, cdivision=True

from libc.stdlib cimport rand, RAND_MAX, calloc, malloc, realloc, free, abort
from libc.math cimport log

#Use the CythonGSL package to get the low-level routines
from cython_gsl cimport *

######################### Define the Data Structure ############################

cdef struct Parameters:
    #Pointer for Y data array
    double* Y
    #size of the array
    int* Size

################ Support Functions for Monte-Carlo Function ##################

#Create a function that allocates the memory and verifies integrity
cdef void alloc_struct(Parameters* data, int N, unsigned int flag) nogil:

    #allocate the data array initially
    if flag==1:
        data.Y = <double*> malloc(N * sizeof(double))
    #reallocate the data array
    else:
        data.Y = <double*> realloc(data.Y, N * sizeof(double))

    #If the elements of the struct are not properly allocated, destroy it and return null
    if N!=0 and data.Y==NULL:
        destroy_struct(data)
        data = NULL

#Create the destructor of the struct to return memory to system
cdef void destroy_struct(Parameters* data) nogil:
    free(data.Y)
    free(data)

#This function fills in the Y observed variable with discrete 0/1
cdef void Y_fill(Parameters* data, double p_true, int* N) nogil:

    cdef:
        Py_ssize_t i
        double y

    for i in range(N[0]):
        y = rand()/<double>RAND_MAX
        if y <= p_true:
            data.Y[i] = 1
        else:
            data.Y[i] = 0

#Definition of the function to be maximized: LLF of Bernoulli
cdef double LLF(double p, void* data) nogil:

    cdef:
        #the sample structure (considered the parameter here)
        Parameters* sample

        #the total of the LLF
        double Sum = 0

        #the loop iterator
        Py_ssize_t i, n

    sample = <Parameters*> data

    n = sample.Size[0]

    for i in range(n):
        Sum += sample.Y[i]*log(p) + (1-sample.Y[i])*log(1-p)

    return (-(Sum/n))

########################## Monte-Carlo Function ##############################

def Monte_Carlo(int[::1] Samples, double[:,::1] p_hat,
                Py_ssize_t Sims, double p_true):

    #Define variables and pointers
    cdef:
        #Data Structure
        Parameters* Data

        #iterators
        Py_ssize_t i, j
        int status, GSL_CONTINUE, Iter = 0, max_Iter = 100

        #Variables
        int N = Samples.shape[0]
        double start_val, a, b, tol = 1e-6

        #GSL objects and pointer
        const gsl_min_fminimizer_type* T
        gsl_min_fminimizer* s
        gsl_function F

    #Set the GSL function
    F.function = &LLF

    #Allocate the minimization routine
    T = gsl_min_fminimizer_brent
    s = gsl_min_fminimizer_alloc(T)

    #allocate the struct
    Data = <Parameters*> malloc(sizeof(Parameters))

    #verify memory integrity
    if Data==NULL: abort()

    #set the starting value
    start_val = rand()/<double>RAND_MAX

    try:
        for i in range(N):
            if i==0:
                #allocate memory to the data array
                alloc_struct(Data, Samples[i], 1)
            else:
                #reallocate the data array in the struct if
                #we are past the first run of outer loop
                alloc_struct(Data, Samples[i], 2)

            #verify memory integrity
            if Data==NULL: abort()

            #pass the data size into the struct
            Data.Size = &Samples[i]

            for j in range(Sims):
                #fill in the struct
                Y_fill(Data, p_true, Data.Size)

                #set the parameters for the GSL function (the samples)
                F.params = <void*> Data
                a = tol
                b = 1

                #set the minimizer
                gsl_min_fminimizer_set(s, &F, start_val, a, b)

                #initialize conditions
                GSL_CONTINUE = -2
                status = -2

                while (status == GSL_CONTINUE and Iter < max_Iter):
                    Iter += 1
                    status = gsl_min_fminimizer_iterate(s)

                    start_val = gsl_min_fminimizer_x_minimum(s)
                    a = gsl_min_fminimizer_x_lower(s)
                    b = gsl_min_fminimizer_x_upper(s)

                    status = gsl_min_test_interval(a, b, tol, 0.0)

                    if (status == GSL_SUCCESS):
                        print ("Converged:\n")
                        p_hat[i,j] = start_val

    finally:
        destroy_struct(Data)
        gsl_min_fminimizer_free(s)
with the following python code to run the above function:
import numpy as np
#Sample Sizes
N = np.array([5,50,500,5000], dtype='i')
#Parameters for MC
T = 1000
p_true = 0.2
#Array of the outputs from the MC
p_hat = np.empty((N.size,T), dtype='d')
p_hat.fill(np.nan)
Monte_Carlo(N, p_hat, T, p_true)
I have separately tested the struct allocation and it works, doing what it should do. However, while running the Monte Carlo, the kernel is killed with an abort call (per the output on my Mac) and the Jupyter output on my console is the following:
gsl: fsolver.c:39: ERROR: computed function value is infinite or NaN
Default GSL error handler invoked.
It seems now that the solver is not working. I'm not familiar with the GSL package, having used it only once to generate random numbers from the Gumbel distribution (bypassing the scipy commands).
I would appreciate any help on this! Thanks
[EDIT: Change lower bound of a]
Redoing the exercise with the exponential distribution, whose log-likelihood function contains just one log, I have narrowed the problem down to gsl_min_fminimizer_set initially evaluating at the lower bound a = 0 and yielding a -INF result (it evaluates the problem before solving, to generate f(lower) and f(upper), where f is the function to optimise). When I set the lower bound to something other than 0 but really small (say, the tol variable of my defined tolerance), the solution algorithm works and yields the correct results.
Many thanks @DavidW for the hints that got me where I needed to go.
This is a somewhat speculative answer since I don't have GSL installed, so I struggle to test it (apologies if it's wrong!).
I think the issue is the line
Sum += sample.Y[i]*log(p) + (1-sample.Y[i])*log(1-p)
It looks like Y[i] can be either 0 or 1. When p is at either end of the range 0-1 this gives 0*-inf = nan. In the case where all the 'Y's are the same, this point is the minimum (so the solver will reliably end up at the invalid point). Fortunately you should be able to rewrite the line to avoid getting a nan:
if sample.Y[i]:
    Sum += log(p)
else:
    Sum += log(1-p)
(the case which will generate the nan is the one not executed).
There's a second minor issue I've spotted: in alloc_struct you do data = NULL in case of an error. This only affects the local pointer, so your test for NULL in Monte_Carlo is meaningless. You'd be better off returning a true or false flag from alloc_struct and checking that. I doubt you're hitting this error though.
Edit: Another better option would be to find the minimum analytically: the derivative of A log(p) + (1-A) log (1-p) is A/p - (1-A)/(1-p). Average all the sample.Ys to find A. Finding the place where the derivative is 0 gives p=A. (You'll want to double-check my working!). With this you can avoid having to use the GSL minimization routines.
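To illustrate that analytic shortcut, the maximum-likelihood estimate can be computed directly with NumPy, no minimizer needed (a standalone sketch, not the original Cython code):

import numpy as np

rng = np.random.default_rng(0)
p_true = 0.2
Y = (rng.random(5000) < p_true).astype(float)   # simulated Bernoulli sample

# Setting the derivative A/p - (1-A)/(1-p) to zero gives p = A,
# i.e. the MLE is simply the sample mean.
p_hat = Y.mean()
print(p_hat)   # close to 0.2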

Parameter restrictions for Nelson-Siegel yield curve in quantlib

I am using QL in Python and have translated parts of the example file
http://quantlib.org/reference/_fitted_bond_curve_8cpp-example.html#_a25;
of how to fit a yield curve with bonds in order to fit a Nelson-Siegel
yield curve to a set of given calibration bonds.
As usual when performing such a non-linear fit, the results depend strongly
on the initial conditions and many (economically meaningless) minima of the
objective function exist. This is why putting constraints on the parameters
is essential for success. To give an example, at times I get negative
tau/lambda parameters and my yield curve diverges.
I did not find how these parameter constraints can be specified in
the NelsonSiegelFitting or the FittedBondDiscountCurve classes. I could
imagine that anyone performing NS fitting in QL will encounter the same
issue.
Thanks to Andres Hernandez for the answer:
Currently it is not possible. However, it is very easy to extend QL to allow it, but I think it needs to be done in the C++ code. So even though you are using QL in Python, can you modify the C++ code and export a new binding? If yes, then you can use the following code; if not, then I could just check it into the code, but it will take some time for the pull request to be accepted. In case you can touch the code, you can add something like this:
in nonlinearfittingmethods.hpp:
class NelsonSiegelConstrainedFitting
    : public FittedBondDiscountCurve::FittingMethod {
  public:
    NelsonSiegelConstrainedFitting(const Array& lower, const Array& upper,
                                   const Array& weights = Array(),
                                   boost::shared_ptr<OptimizationMethod> optimizationMethod
                                       = boost::shared_ptr<OptimizationMethod>());
    std::auto_ptr<FittedBondDiscountCurve::FittingMethod> clone() const;
  private:
    Size size() const;
    DiscountFactor discountFunction(const Array& x, Time t) const;
    Array lower_, upper_;
};
in nonlinearfittingmethods.cpp:
NelsonSiegelConstrainedFitting::NelsonSiegelConstrainedFitting(
        const Array& lower, const Array& upper, const Array& weights,
        boost::shared_ptr<OptimizationMethod> optimizationMethod)
: FittedBondDiscountCurve::FittingMethod(true, weights, optimizationMethod),
  lower_(lower), upper_(upper) {
    QL_REQUIRE(lower_.size() == 4, "Lower constraint must have 4 elements");
    QL_REQUIRE(upper_.size() == 4, "Upper constraint must have 4 elements");
}

std::auto_ptr<FittedBondDiscountCurve::FittingMethod>
NelsonSiegelConstrainedFitting::clone() const {
    return std::auto_ptr<FittedBondDiscountCurve::FittingMethod>(
        new NelsonSiegelConstrainedFitting(*this));
}

Size NelsonSiegelConstrainedFitting::size() const {
    return 4;
}

DiscountFactor NelsonSiegelConstrainedFitting::discountFunction(const Array& x,
                                                                Time t) const {
    // Extreme values of kappa result in collinear behaviour of x[1] and x[2],
    // so it should be constrained not only to be positive, but also not very extreme.
    Real kappa = lower_[3] + upper_[3]/(1.0 + exp(-x[3]));
    Real x0 = lower_[0] + upper_[0]/(1.0 + exp(-x[0])),
         x1 = lower_[1] + upper_[1]/(1.0 + exp(-x[1])),
         x2 = lower_[2] + upper_[2]/(1.0 + exp(-x[2]));

    Real zeroRate = x0 + (x1 + x2)*
                        (1.0 - std::exp(-kappa*t))/
                        ((kappa + QL_EPSILON)*(t + QL_EPSILON)) -
                    x2*std::exp(-kappa*t);
    DiscountFactor d = std::exp(-zeroRate * t);
    return d;
}
You then need to add it to the swig interface, but it should be trivial to do so.
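As a side note, the constraint trick in discountFunction above is just a logistic transform: each unconstrained optimizer variable x[i] is mapped smoothly into the interval (lower_[i], lower_[i] + upper_[i]). A small NumPy illustration of that mapping (not QuantLib code):

import numpy as np

def constrain(x, lower, upper):
    # mirrors lower_[i] + upper_[i] / (1 + exp(-x[i])) in the C++ above
    return lower + upper / (1.0 + np.exp(-np.asarray(x)))

print(constrain([-10.0, 0.0, 10.0], lower=0.0, upper=2.0))
# values near 0.0, exactly 1.0, and near 2.0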

Vectorize a Newton method in Python/Numpy

I am trying to figure out if Python/Numpy is a viable alternative to develop my numerical software which is already available in C++. In order to get performance in Python/Numpy, one needs to "vectorize" the code. But it turns out that as soon as I move away from very simple examples, I struggle to vectorize the code (I am not talking about SIMD instructions but "efficient Numpy code" without loops). Here is an algorithm that I want to implement efficiently in Python/Numpy.
Create a numpy array containing: 1.0, 1.0 + 1/n, 1.0 + 2/n, ..., 2.0
For every u in the array, compute the root of x^2 - u, using a Newton method, stopping when |dx| <= 1.0e-7. Store the result in an array result.
Sum all the elements of the result array
Here is the algorithm in Python I want to speed up
import numpy as np

n = 1000000
data = np.arange(1.0, 2.0, 1.0 / n)

def newton(u):
    x = 2.0
    while True:
        f = x**2 - u
        df_dx = 2 * x
        dx = f / df_dx
        if (abs(dx) <= 1.0e-7):
            break
        x -= dx
    return x

result = map(newton, data)
print result[n - 1]
Here is a version of the algorithm in C++11
#include <iostream>
#include <vector>
#include <cmath>

int main (int argc, char const *argv[]) {
    auto n = std::size_t{100000000};
    auto v = std::vector<double>(n + 1);
    for(size_t k = 0; k < v.size(); ++k) {
        v[k] = 1.0 + static_cast<double>(k) / n;
    }

    auto result = std::vector<double>(n + 1);
    for(size_t k = 0; k < v.size(); ++k) {
        auto x = double{2.0};
        while(true) {
            auto f = double{x * x - v[k]};
            auto df_dx = double{2 * x};
            auto dx = double{f / df_dx};
            if (std::abs(dx) <= 1.0e-7) {
                break;
            }
            x -= dx;
        }
        result[k] = x;
    }

    auto somme = double{0.0};
    for(size_t k = 0; k < result.size(); ++k) {
        somme += result[k];
    }
    std::cout << somme << std::endl;

    return 0;
}
It takes 2.9 seconds to run on my machine. Is there a way to make a fast Python/Numpy algorithm that does the same thing (I am willing to get something that is less than 5 times slower).
Thanks.
You can do step 1. with numpy efficiently:
1.0 + np.arange(n + 1) / n
however I think you would need the np.vectorize() method to feed back x into your calculated values, and it's not an efficient function (basically a wrapper for a Python loop). If you can use scipy then there are built-in methods that might do what you want: http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.optimize.newton.html
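For example, recent SciPy versions let scipy.optimize.newton take an array-valued x0 and iterate every element at once; a rough sketch, assuming such a SciPy version is available:

import numpy as np
from scipy import optimize

n = 1000000
u = 1.0 + np.arange(n + 1) / n

# vectorized Newton-Raphson: each element of x0 is solved independently
roots = optimize.newton(lambda x: x**2 - u, np.full_like(u, 2.0),
                        fprime=lambda x: 2.0 * x, tol=1.0e-7)
print(roots.sum())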
EDIT: Having thought a bit more about this, I followed up on @ev-br's point and tried some alternatives. The masking uses too much processing, but the abs().max() is pretty fast, so a compromise might be to "divide the problem into blocks" both in the 1st dimension of the array and in the iteration direction. The following doesn't do too badly (< 20 s) on my pretty low power laptop - certainly much faster than np.vectorize() or any of the scipy solving systems I could find. (If I set m too big it runs out of something (memory?) and grinds to a complete halt!)
n = 100000000
m = 5000000
block = 3
u = 1.0 + np.arange(n + 1) / n
x = np.full(u.shape, 2.0)
dx = np.ones(u.shape)
for i in range(0, n, m):
    while np.abs(dx[i:i+m]).max() > 1.0e-7:
        for j in range(block):
            dx[i:i+m] = (x[i:i+m] ** 2 - u[i:i+m]) / (2 * x[i:i+m])
            x[i:i+m] -= dx[i:i+m]
Here's a toy example. Notice that often vectorization means writing your code as if you're manipulating numbers, and letting numpy do its magic:
>>> import numpy as np
>>> a = np.array([1., 2., 3.])
>>> def f(x):
...     return x**2 - a, 2.*x  # function and derivative
>>>
>>> def newt(f, x0):
...     x = np.asarray(x0)
...     for _ in range(5):  # hardcode the number of iterations (I know)
...         v, dv = f(x)
...         x -= v / dv
...     return x
>>>
>>> newt(f, [1., 1., 1.])
array([ 1.        ,  1.41421356,  1.73205081])
If this is a performance bottleneck, this is unlikely to be competitive with hand-written C++ code: first of all, you're manipulating Python objects with all the overhead; then numpy is likely doing a bunch of array allocations under the hood.
An often viable strategy is to start by writing things in python/numpy, and then move bottlenecks into compiled code --- e.g. Cython or C++ wrapped by Cython. In this particular case, since you already have the C++ code, just wrapping it with Cython is likely easiest, but YMMV.
I'm not looking to wave small snippets of code as a solution, but here's something to get you started. I have a strong suspicion that you're having trouble just declaring such an array in Python without spending too much time on it, so I'll mostly help you out there.
As far as the square roots are concerned, please add your example Python code and I'll see what I can help optimize from that point on. In my example, roots and sums are found with the default numpy functions/methods.
def summing():
    n = 1000000
    ar = np.arange(0, n)
    ar = ar / float(n)
    ar = ar + np.ones(n)
    sqrt = np.sqrt(ar)
    return np.sum(sqrt)
In short, to get the starting array it's best to use a "workaround":
- initialize an array ar with the values [1, 2, 3, ..., n]
- divide ar by n; this gets us the 1/n, 2/n, ... members
- add to that an array of the same dimensions that contains just the number 1.0; this gets us the full array [1., 1.000001, 1.000002, ..., 1.999998, 1.999999] we're after, if I understood you right
- find the square roots and sum them
Average of 10 sequential execution times is 0.018786 seconds.
Obviously I'm 6 years late to this party, but this question is a common stumbling block for people trying to use numpy effectively for real scientific work. The basic idea is covered in @ev-br's answer. The OP points out that the solution offered there (even modified to stop iterating when a convergence criterion is met rather than after a fixed number of iterations) takes the same number of passes for each element of u. I want to show how you can avoid that objection using pure numpy code, making explicit the mask suggestion in @ev-br's comment.
However, I also want to point out that in many real world situations, the number of passes for Newton-like iteration to converge varies so little that this general technique I illustrate here will actually slow numpy code down significantly. If the average number of iterations is within a factor of two or three of the maximum number of iterations, you should stick with something closer to @ev-br's answer (including his first comment).
The numpy performance numbers you need to understand are these: loops over array indices will run 200 to 500 times slower in pure numpy code than in compiled code. On the other hand, if you manage to use numpy's array syntax to avoid all index loops, you can get within about a factor of 5 of compiled speed. (The factor of 5 is partly because of memory management, as @ev-br mentions, but also because optimized compiled code overlaps many different arithmetic operations inside each index loop, while numpy just performs a single arithmetic operation, storing everything back to memory after each operation.) The point is that a factor of 100 difference means that it often pays to do substantial amounts of "extra" work in numpy code: even if you do 10 times the number of floating point operations in vectorized numpy code, it will still run 10 times faster than the index-loop code that avoids the "extra" work. (Incidentally, the python map function is implemented as an interpreted index loop - it has nothing to do with numpy array operations.)
from numpy import asfarray, broadcast_arrays, arange

# Begin by defining the function to be inverted by Newton's method.
def f_dfdx(x):
    x = asfarray(x)  # always avoid repeated type conversions
    return x**2, 2.*x

# First, the simplest algorithm to find x such that f(x)=y.
# We must supply a starting guess x0 for x.
def f_inverse0(f_dfdx, y, x0, tol=1.e-7):
    y, x = broadcast_arrays(asfarray(y), asfarray(x0))
    x = x.copy()  # without this may clobber input x0
    for npass in range(20):
        f, dfdx = f_dfdx(x)
        dx = (f - y) / dfdx
        if (abs(dx) <= tol).all():
            break  # iterate all x until all have converged
        x -= dx
    else:
        raise RuntimeError("failed to converge")
    return x

# A frequently slower algorithm that avoids extra iterations.
def f_inverse1(f_dfdx, y, x0, tol=1.e-7):
    y, x = broadcast_arrays(asfarray(y), asfarray(x0))
    shape = x.shape
    y, x = y.ravel(), x.flatten()  # avoid clobbering x0
    unconverged = arange(y.size)
    for npass in range(20):
        f, dfdx = f_dfdx(x[unconverged])
        dx = (f - y[unconverged]) / dfdx
        unc = abs(dx) > tol
        unconverged = unconverged[unc]
        if not unconverged.size:
            break  # iterate all x until all have converged
        x[unconverged] -= dx[unc]
    else:
        raise RuntimeError("failed to converge")
    return x.reshape(shape)
On my machine, the OP's C++ program runs in 2.03 s (1.64+0.38 user+sys). For n=100 million as for the C++ program, f_inverse0 runs in 20.4 s (4.7+15.6 user+sys). As expected, f_inverse1 is slower, 51.3 s (11.5+39.8 user+sys). Again, don't automatically try to minimize total operation count when you are writing numpy code. The high system overhead is probably due to heavy memory management - every vector temporary is 0.8 GB and the memory manager is struggling.
Cutting the array size to n = 1 million elements (8 MB), then multiplying the runtime by 100 brings the system time down by a large factor, f_inverse0 now takes 16.1 s (12.5+3.6), while f_inverse1 takes 22.3 s (16.2+5.1). This factor of 8 to 10 slower than compiled code is not unreasonable to expect for numpy performance.
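For reference, applying the functions above to the original problem (the root of x^2 - u, i.e. sqrt(u)) looks like this; it relies on the f_dfdx and f_inverse0 definitions from the code block above:

import numpy as np

n = 1000000
u = 1.0 + np.arange(n + 1) / n

roots = f_inverse0(f_dfdx, u, 2.0)   # Newton iterations on all elements at once
print(roots.sum())                   # should be very close to np.sqrt(u).sum()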
