Is there an efficient way to iterate through all ndarray elements? - python

My problem is that I have an ndarray of shape (N, M, 3), and I am trying to check each element in the array using a low-level approach. Currently I am doing something like:
for i in range(N):
    for j in range(M):
        if ndarr[i][j][2] == 3:
            ndarr[i][j][0] = var1
Most of the time the ndarray I need to process is very large, usually around 1000x1000.
The same idea I managed to run in C++ within a couple of milliseconds; in Python it takes around 30 seconds at best.
I would really appreciate it if someone could explain to me, or point me towards reading material on, how to efficiently iterate through an ndarray.

There is no way of doing that efficiently.
NumPy is a small Python wrapper around C code/datatypes. So an ndarray is actually a multidimensional C array. That means the memory address of the array is the address of the first element of the array. All other elements are stored consecutively in memory.
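You can see this layout directly (a small illustration; the exact stride values depend on the dtype):
import numpy as np
a = np.zeros((1000, 1000, 3), dtype=np.int64)
print(a.flags['C_CONTIGUOUS'])  # True: one contiguous block of memory
print(a.strides)                # (24000, 24, 8): bytes to step along each axis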
What your Python for loop does is grab each element of the array and temporarily save it somewhere else (as a Python data structure) before stuffing it back into the C array. As I have said, there is no way of doing that efficiently with a Python loop.
What you could do is use Numba's @jit to speed up the for loop, or look for a NumPy routine that can iterate over an array.
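For the "NumPy routine that can iterate over an array" part, np.nditer is the usual candidate; here is a minimal sketch (note it is still a Python-level loop, so it will not match vectorized or Numba speed; the shape and var1 are placeholder values):
import numpy as np
ndarr = np.random.randint(0, 4, size=(4, 5, 3))
var1 = 7
it = np.nditer(ndarr[:, :, 2], flags=['multi_index'])
for val in it:          # walks the (i, j) plane of the last-axis slice
    if val == 3:
        i, j = it.multi_index
        ndarr[i, j, 0] = var1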

You can use logical indexing to do this more efficiently; it might be interesting to see how it compares with your C++ implementation.
import numpy as np
a = np.random.randn(2, 4, 3)
print(a)
idx = a[:, :, 2] > 0
a[idx, 0] = 9
print(a)
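Applied to the condition from the question, the same idea would look like this (a sketch; N, M and var1 are placeholder values):
import numpy as np
N, M, var1 = 1000, 1000, 7
ndarr = np.random.randint(0, 4, size=(N, M, 3))
mask = ndarr[:, :, 2] == 3   # boolean mask over the first two axes
ndarr[mask, 0] = var1        # assign var1 wherever the condition holds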

In NumPy you have to use vectorized commands (usually calling a C or Cython function) to achieve good performance. As an alternative you can use Numba or Cython.
Two possible implementations:
import numba as nb
import numpy as np
def calc_np(ndarr, var1):
    ndarr[ndarr[:, :, 2] == 3, 0] = var1
    return ndarr
@nb.njit(parallel=True, cache=True)
def calc_nb(ndarr, var1):
    for i in nb.prange(ndarr.shape[0]):
        for j in range(ndarr.shape[1]):
            if ndarr[i, j, 2] == 3:
                ndarr[i, j, 0] = var1
    return ndarr
Timings
ndarr=np.random.randint(low=0,high=3,size=(1000,1000,3))
%timeit calc_np(ndarr,2)
#780 µs ± 6.78 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
#first call takes longer due to compilation overhead
res=calc_nb(ndarr,2)
%timeit calc_nb(ndarr,2)
#55.2 µs ± 160 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Edit
You are also using a suboptimal indexing method. ndarr[i] gives a 2D view on the original 3D array, and the next indexing operation [j] creates another view on that view. This also has quite an impact on performance.
def calc_1(ndarr, var1):
    for i in range(ndarr.shape[0]):
        for j in range(ndarr.shape[1]):
            if ndarr[i][j][2] == 3:
                ndarr[i][j][0] = var1
    return ndarr
def calc_2(ndarr, var1):
    for i in range(ndarr.shape[0]):
        for j in range(ndarr.shape[1]):
            if ndarr[i, j, 2] == 3:
                ndarr[i, j, 0] = var1
    return ndarr
%timeit calc_1(ndarr,2)
#549 ms ± 11.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit calc_2(ndarr,2)
#321 ms ± 2.24 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Related

Numba Slow Array Element Assignment to Variable

This is a contrived test case but, hopefully, it can suffice to convey the point and ask the question. Inside of a Numba njit function, I noticed that it is very costly to assign a locally computed value to an array element. Here are two example functions:
from numba import njit
import numpy as np
@njit
def slow_func(x, y):
    result = y.sum()
    for i in range(x.shape[0]):
        if x[i] > result:
            x[i] = result
        else:
            x[i] = result
@njit
def fast_func(x, y):
    result = y.sum()
    for i in range(x.shape[0]):
        if x[i] > result:
            z = result
        else:
            z = result
if __name__ == "__main__":
    x = np.random.rand(100_000_000)
    y = np.random.rand(100_000_000)
    %timeit slow_func(x, y)  # 177 ms ± 1.49 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
    %timeit fast_func(x, y)  # 407 ns ± 12.8 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
I understand that the two functions aren't quite doing the same thing, but let's not worry about that for now and stay focused on the "slow assignment". Also, due to Numba's lazy initialization, the timing above has been re-run post JIT-compiling. Notice that both functions assign result to either x[i] or to z, and the number of assignments is the same in both cases. However, the assignment of result to z is substantially faster. Is there a way to make slow_func as fast as fast_func?
As @PaulPanzer has already pointed out, your fast function does nothing once optimized - so what you see is basically the overhead of calling a numba function.
The interesting part is that, in order to do this optimization, numba must be replacing np.sum with its own sum implementation - otherwise the optimizer would not be able to throw the call to this function away, as it cannot look into the implementation of np.sum and would have to assume that calling it has side effects.
Let's measure only the summation with numba:
from numba import njit
@njit
def only_sum(x, y):
    return y.sum()
%timeit only_sum(y,x)
# 112 ms ± 623 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Well, that is disappointing: I know my machine can do more than 10^9 additions per second and read up to 13 GB/s from RAM (there is about 0.8 GB of data, so it doesn't fit in the cache), which means I would expect the summation to take between 60 and 80 ms.
And if I use numpy's version, it really does:
%timeit y.sum()
# 57 ms ± 444 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
That sounds about right! I assume numba doesn't use pairwise summation and thus is slower (it doesn't even reach the speed the RAM bandwidth would allow) and less precise than numpy's version.
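A small illustration of the pairwise-summation point (this only shows the precision side, not numba's code path; float32 and a modest size are chosen so the plain Python loop stays quick):
import numpy as np
x = np.full(10**6, 0.1, dtype=np.float32)
naive = np.float32(0.0)
for v in x:             # plain left-to-right accumulation
    naive += v
print(float(naive))     # visibly off from the exact value 100000
print(float(x.sum()))   # pairwise summation, much closer to 100000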
If we just look at the writing of the values:
@njit
def only_assign(x, y):
    res = y[0]
    for i in range(x.shape[0]):
        x[i] = res
%timeit only_assign(x,y)
85.2 ms ± 417 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
So we see that writing is indeed slower than reading. The reason for that (and how it can be fixed) is explained in this great answer: the updating of caches, which numba (rightly?) doesn't bypass.
In a nutshell: while assigning values in numba isn't really slow (even if it could be sped up by using non-temporal memory accesses), the really slow part is the summation (which seems not to use pairwise summation) - it is inferior to numpy's version.

Why is numpy.linalg.norm slow when called many times for small-sized data?

import numpy as np
from datetime import datetime
import math
def norm(l):
    s = 0
    for i in l:
        s += i**2
    return math.sqrt(s)
def foo(a, b, f):
    l = range(a)
    s = datetime.now()
    for i in range(b):
        f(l)
    e = datetime.now()
    return e - s
foo(10**4, 10**5, norm)
foo(10**4, 10**5, np.linalg.norm)
foo(10**2, 10**7, norm)
foo(10**2, 10**7, np.linalg.norm)
I got the following output:
0:00:43.156278
0:00:23.923239
0:00:44.184835
0:01:00.343875
It seems like when np.linalg.norm is called many times for small-sized data, it runs slower than my norm function.
What is the cause of that?
First of all: datetime.now() isn't appropriate for measuring performance; it measures wall time, and you may just pick a bad moment (for your computer) when a high-priority process runs or Python's GC kicks in, ...
There are dedicated timing functions/modules available in Python: the built-in timeit module, %timeit in IPython/Jupyter, and several other external modules (like perf, ...).
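For example, outside IPython the same kind of measurement could be done with the built-in timeit module (a minimal sketch; the call count is illustrative):
import timeit
import numpy as np
r = range(10**2)
t = timeit.timeit(lambda: np.linalg.norm(r), number=10_000)
print(f"{t / 10_000 * 1e6:.2f} µs per call")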
Let's see what happens if I use these on your data:
import numpy as np
import math
def norm(l):
    s = 0
    for i in l:
        s += i**2
    return math.sqrt(s)
r1 = range(10**4)
r2 = range(10**2)
%timeit norm(r1)
3.34 ms ± 150 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit np.linalg.norm(r1)
1.05 ms ± 3.92 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit norm(r2)
30.8 µs ± 1.53 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit np.linalg.norm(r2)
14.2 µs ± 313 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
It isn't slower for short iterables; it's still faster. However, note that the real advantage of NumPy functions comes if you already have NumPy arrays:
a1 = np.arange(10**4)
a2 = np.arange(10**2)
%timeit np.linalg.norm(a1)
18.7 µs ± 539 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit np.linalg.norm(a2)
4.03 µs ± 157 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Yeah, it's quite a lot faster now: 18.7 µs vs. 1.05 ms - more than 50 times faster for 10000 elements. That means most of the time of np.linalg.norm in your examples was spent converting the range to an np.array.
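One way to check that directly is to time just the conversion; it should account for the bulk of the ~1 ms measured for np.linalg.norm(r1) above:
%timeit np.asarray(r1)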
You are on the right track.
np.linalg.norm has quite a high overhead on small arrays. On large arrays both the jit-compiled function and np.linalg.norm run into a memory bottleneck, which is expected for a function that spends most of its time on simple multiplications.
If the jitted function is called from another jitted function it might get inlined, which can lead to a much larger advantage over the numpy norm function (see the sketch after the performance numbers below).
Example
import numba as nb
import numpy as np
@nb.njit(fastmath=True)
def norm(l):
    s = 0.
    for i in range(l.shape[0]):
        s += l[i]**2
    return np.sqrt(s)
Performance
r1 = np.array(np.arange(10**2),dtype=np.int32)
Numba:0.42µs
linalg:4.46µs
r2 = np.array(np.arange(10**4),dtype=np.int32)
Numba:8.9µs
linalg:13.4µs
r1 = np.array(np.arange(10**2),dtype=np.float64)
Numba:0.35µs
linalg:3.71µs
r2 = np.array(np.arange(10**4), dtype=np.float64)
Numba:1.4µs
linalg:5.6µs
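As a sketch of the inlining remark, the jitted norm above could be called from another jitted function; normalize here is a hypothetical example, not part of the question:
@nb.njit(fastmath=True)
def normalize(l):
    return l / norm(l)   # the call to norm may be inlined by Numba/LLVM
vec = np.arange(1., 10**4 + 1)
print(normalize(vec)[:3])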
Measuring Performance
Call the jit-compiled function once before the measurement (there is a one-time compilation overhead on the first call); see the sketch after this list.
Check whether the measurement is realistic (since small arrays stay in the processor cache, results can be too optimistic, exceeding the RAM throughput you would see on realistic examples).
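A minimal sketch of the warm-up point, using %timeit as elsewhere in this thread:
norm(r2)          # first call triggers compilation; keep it out of the timing
%timeit norm(r2)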

np.sum and np.add.reduce - in production, what do you use?

As a background, please read this quick post and clear answer:
What is the difference between np.sum and np.add.reduce?
So, for a small array, using add.reduce is faster. Let's take the following code, which I experimented with for learning; it sums a 2D array:
a = np.array([[1,4,6],[3,1,2]])
print('Sum function result =', np.sum(a))
# faster for a small array -
# print(np.add.reduce(a))
# but that only reduces the dimension by 1, so do it repeatedly. I create a copy of a since I keep reducing it:
x = np.copy(a)
while x.size > 1:
    x = np.add.reduce(x)
print('Sum with add.reduce =', x)
So, the above seems like overkill - I assume it's better to just use sum when you don't know the size of your array, and definitely if it's more than one dimension. Does anyone use add.reduce in production code if your array isn't obvious/small? If so, why?
Any comments on improving the code are welcome.
I don't think I've used np.add.reduce when np.sum or arr.sum would do just as well. Why type something longer for a trivial speedup?
Consider a 1 axis sum on a modest size array:
In [299]: arr = np.arange(10000).reshape(100,10,5,2)
In [300]: timeit np.sum(arr,axis=0).shape
20.1 µs ± 547 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [301]: timeit arr.sum(axis=0).shape
17.6 µs ± 22.7 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [302]: timeit np.add.reduce(arr,axis=0).shape
18 µs ± 300 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
arr.sum is fastest. Obviously it beats np.sum because there's one less level of function call. np.add.reduce isn't faster.
ufunc.reduce has its place, especially for ufuncs that don't have an equivalent of sum or prod (it seems I commented about this recently).
I suspect you'll find more uses of np.add.at or np.add.reduceat than np.add.reduce in SO answers. Those are ufunc constructs that don't have a method equivalent.
Or search for a keyword like keepdims. That's available with all 3 constructs, but almost all examples will be using it with sum, not reduce.
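For illustration, a quick sketch of those ufunc constructs (the arrays are made up for this example):
import numpy as np
a = np.array([1, 4, 6, 3, 1, 2])
print(np.add.reduceat(a, [0, 3]))   # sums of the slices a[0:3] and a[3:] -> [11 6]
out = np.zeros(3, dtype=int)
np.add.at(out, [0, 0, 2], 1)        # unbuffered add: index 0 is hit twice
print(out)                          # -> [2 0 1]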
When I was setting up those tests, I stumbled on a difference I wasn't aware of:
In [307]: np.add.reduce(arr).shape # default axis 0
Out[307]: (10, 5, 2)
In [308]: np.sum(arr) # default axis None
Out[308]: 49995000
In [309]: arr.sum()
Out[309]: 49995000

What is the fastest way in Cython to create a new array from an existing array and a variable

Suppose I have an array
from array import array
myarr = array('l', [1, 2, 3])
and a variable:
myvar = 4
what is the fastest way to create a new array:
newarray = array('l', [1, 2, 3, 4])
You can assume all elements are of 'long' type
I have tried to create a new array and use array.append()
not sure if it is fastest. I was thinking of using memoryview like:
malloc(4*sizeof(long))
but I don't know how to copy a shorter array into part of the memoryview, and then insert the last element into the last position.
I am fairly new to Cython. Thanks for any help!
Update:
I compared the following three methods:
Cython: [100000 loops, best of 3: 5.94 µs per loop]
from libc.stdlib cimport malloc
def cappend(long[:] arr, long var, size_t N):
    cdef long[:] result = <long[:(N+1)]>malloc((N+1)*sizeof(long))
    result.base[:N] = arr
    result.base[N] = var
    return result
array: [1000000 loops, best of 3: 1.21 µs per loop]
from array import array
import copy
def pyappend(arr, x):
    result = copy.copy(arr)
    result.append(x)
    return result
list append: [1000000 loops, best of 3: 480 ns per loop]
def pylistappend(lst, x):
    result = lst[:]
    result.append(x)
    return result
Is there hope to improve the Cython part and beat the array one?
Cython gives us more access to the internals of array.array than "normal" Python, so we can use it to speed up the code: almost by a factor of 7 for your small example (by eliminating most of the overhead), and by a factor of 2 for larger inputs (by eliminating an unnecessary array copy). Read on for more details.
It's a little bit unusual to try to optimize a function for such small input, but not without (at least theoretical) interest.
So let's start with your functions as baseline:
a=array('l', [1,2,3])
%timeit pyappend(a, 8)
1.03 µs ± 10.4 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
lst=[1,2,3]
%timeit pylistappend(lst, 8)
279 ns ± 6.03 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
We must be aware that what we are measuring is not the cost of copying but the cost of overhead (the Python interpreter, calling functions and so on); for example, there is no difference whether a has 3 or 5 elements:
a=array('l', range(5))
%timeit pyappend(a, 8)
1.03 µs ± 6.76 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
In the array version we have more overhead because of the indirection via the copy module; we can try to eliminate that:
def pyappend2(arr, x):
    result = array('l', arr)
    result.append(x)
    return result
%timeit pyappend2(a, 8)
496 ns ± 5.04 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
That is faster. Now let's use cython - this would eliminate the interpreter costs:
%%cython
def cylistappend(lst, x):
    result = lst[:]
    result.append(x)
    return result
%%cython
from cpython cimport array
def cyappend(array.array arr, long long int x):
    cdef array.array res = array.array('l', arr)
    res.append(x)
    return res
%timeit cylistappend(lst, 8)
193 ns ± 12.4 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
%timeit cyappend(a, 8)
421 ns ± 8.08 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
The Cython versions are about 33% faster for the list and about 10% faster for the array. The constructor array.array() expects an iterable, but we already have an array.array, so we can use the functionality from cpython to get access to the internals of the array.array object and improve the situation a little:
%%cython
from cpython cimport array
def cyappend2(array.array arr, long long int x):
    cdef array.array res = array.copy(arr)
    res.append(x)
    return res
%timeit cyappend2(a, 8)
305 ns ± 7.25 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
For the next step we need to know how array.array appends elements: normally it over-allocates, so append() has amortized cost O(1); however, after array.copy the new array has exactly the needed number of elements, and the next append triggers a reallocation. We need to change that (see here for a description of the functions used):
%%cython
from cpython cimport array
from libc.string cimport memcpy
def cyappend3(array.array arr, long long int x):
    cdef Py_ssize_t n = len(arr)
    cdef array.array res = array.clone(arr, n+1, False)
    memcpy(res.data.as_voidptr, arr.data.as_voidptr, 8*n)  # that is pretty sloppy..
    res.data.as_longlongs[n] = x
    return res
%timeit cyappend3(a, 8)
154 ns ± 1.34 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
Similar to your function, the memory is allocated with the final size up front, so the extra reallocation that append() would trigger is avoided. Now we are faster than the list version and almost 7 times faster than the original Python version.
Let's compare timings for bigger arrays (a=array('l',range(1000)), lst=list(range(1000))), where copying the data accounts for most of the running time:
pyappend 1.84 µs #copy-module is slow!
pyappend2 1.02 µs
cyappend 0.94 µs #cython no big help - we are copying twice
cyappend2 0.90 µs #still copying twice
cyappend3 0.43 µs #copying only once -> twice as fast!
pylistappend 4.09 µs # needs to increment refs of integers
cylistappend 3.85 µs # the same as above
Now, eliminating the unnecessary copy for array.array gives us the expected factor 2.
For even bigger arrays (10000 elements), we see the following:
pyappend 6.9 µs #copy-module is slow!
pyappend2 4.8 µs
cyappend2 4.4 µs
cyappend3 4.4 µs
There is no longer a difference between the versions (if one ignores the slow copy module). The reason is the changed behavior of array.array for such a large number of elements: when copying, it over-allocates, thus avoiding the reallocation after the first append().
We can easily check it:
b = array('l', array('l', range(10**3)))   # emulate our functions
b.buffer_info()
(94481422849232, 1000)
b.append(1)
b.buffer_info()
(94481422860352, 1001)    # a different pointer address -> reallocated
...
b = array('l', array('l', range(10**4)))
b.buffer_info()
(94481426290064, 10000)
b.append(33)
b.buffer_info()
(94481426290064, 10001)   # the same pointer address -> no reallocation!

Why is numpy faster at finding non-zero elements in a matrix?

import numpy as np
def nonzero(a):
    row, colum = a.shape
    nonzero_row = np.array([], dtype=int)
    nonzero_col = np.array([], dtype=int)
    for i in range(0, row):
        for j in range(0, colum):
            if a[i, j] != 0:
                nonzero_row = np.append(nonzero_row, i)
                nonzero_col = np.append(nonzero_col, j)
    return (nonzero_row, nonzero_col)
The above code is much slower compared to
(row,col) = np.nonzero(edges_canny)
It would be great if I could get some direction on how to increase the speed, and on why numpy functions are so much faster.
There are 2 reasons why NumPy functions can outperform Python's types:
The values inside the array are native types, not Python types. This means NumPy doesn't need to go through the abstraction layer that Python has.
NumPy functions are (mostly) written in C. That actually only matters in some cases because a lot of Python functions are also written in C, for example sum.
In your case you also do something really inefficient: you append to an array. That's a really expensive operation in the middle of a double loop, and an obvious (and unnecessary) bottleneck right there. You would get amazing speedups just by using lists for nonzero_row and nonzero_col, and only converting them to arrays just before you return:
def nonzero_list_based(a):
    row, colum = a.shape
    a = a.tolist()
    nonzero_row = []
    nonzero_col = []
    for i in range(0, row):
        for j in range(0, colum):
            if a[i][j] != 0:
                nonzero_row.append(i)
                nonzero_col.append(j)
    return (np.array(nonzero_row), np.array(nonzero_col))
The timings:
import numpy as np
def nonzero_original(a):
    row, colum = a.shape
    nonzero_row = np.array([], dtype=int)
    nonzero_col = np.array([], dtype=int)
    for i in range(0, row):
        for j in range(0, colum):
            if a[i, j] != 0:
                nonzero_row = np.append(nonzero_row, i)
                nonzero_col = np.append(nonzero_col, j)
    return (nonzero_row, nonzero_col)
arr = np.random.randint(0, 10, (100, 100))
%timeit np.nonzero(arr)
# 315 µs ± 5.39 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit nonzero_original(arr)
# 759 ms ± 12.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit nonzero_list_based(arr)
# 13.1 ms ± 492 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Even though it's about 40 times slower than the NumPy operation, it's still nearly 60 times faster than your approach. There's an important lesson here: avoid np.append whenever possible!
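The underlying reason is that np.append copies the whole array on every call, so appending n times inside a loop is O(n**2); a tiny sketch:
import numpy as np
a = np.arange(5)
b = np.append(a, 99)            # returns a brand-new array; a is left unchanged
print(a is b, a.size, b.size)   # False 5 6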
One additional reason NumPy outperforms alternative approaches is that it (mostly) uses state-of-the-art algorithms (or "imports" them, e.g. from BLAS/LAPACK/ATLAS/MKL) to solve the problems. These algorithms have been optimized for correctness and speed over years (if not decades). You shouldn't expect to find a faster or even comparable solution.
