(memory-)efficient operations between arbitrary columns of numpy array - python

I have a large 2D numpy array. I would like to be able to efficiently run row-wise operations on subsets of the columns, without copying the data.
In what follows,
a = np.arange(10000000).reshape(1000, 10000) and columns = np.arange(1, 10000, 2). For reference,
In [4]: %timeit a.sum(axis=1)
7.26 ms ± 431 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
The approaches I am aware of are:
fancy indexing with list of columns
In [5]: %timeit a[:, columns].sum(axis=1)
42.5 ms ± 197 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
fancy indexing with mask of columns
In [6]: cols_mask = np.zeros(10000, dtype=bool)
...: cols_mask[columns] = True
In [7]: %timeit a[:, cols_mask].sum(axis=1)
42.1 ms ± 302 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
masked array
In [8]: cells_mask = np.ones((1000, 10000), dtype=bool)
In [9]: cells_mask[:, columns] = False
In [10]: am = np.ma.masked_array(a, mask=cells_mask)
In [11]: %timeit am.sum(axis=1)
80 ms ± 2.71 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
python loop
In [12]: %timeit sum([a[:, i] for i in columns])
31.2 ms ± 531 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Somewhat surprisingly to me, the last approach is the most efficient; moreover, it avoids copying the full data, which for me is a prerequisite. However, it is still much slower than the simple sum (on double the data size), and most importantly, it is not trivial to generalize to other operations (e.g., cumsum).
Is there any approach I am missing? I would be fine with writing some cython code, but I would like the approach to work for any numpy function, not just sum.

On this one, Pythran seems a bit faster than Numba, at least on my rig:
import numpy as np

# pythran export col_sum(float[:,:], int[:])
# pythran export col_sum(int[:,:], int[:])
def col_sum(data, idx):
    return data.T[idx].sum(0)
Compile with pythran <filename.py>
Timings:
timeit(lambda:cs_pythran.col_sum(a, columns),number=1000)
# 1.644187423051335
timeit(lambda:cs_numba.col_sum(a, columns),number=1000)
# 2.635075871949084
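The Numba version used for comparison (cs_numba) is not shown here; a minimal sketch of what it could look like, written with explicit loops since that is usually what Numba handles best, is:
import numpy as np
from numba import njit

@njit
def col_sum(data, idx):
    # Row-wise sum over the selected columns, using explicit loops.
    out = np.zeros(data.shape[0], dtype=data.dtype)
    for i in range(data.shape[0]):
        for k in range(idx.shape[0]):
            out[i] += data[i, idx[k]]
    return out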

If you want to beat C-compiled block summation, you're probably best off with Numba. Any indexing that stays in Python carries Python overhead; Numba JIT-compiles the function to native machine code, so the indexing runs at C speed.
from numba import jit

@jit
def col_sum(block, idx):
    return block[:, idx].sum(1)
%timeit a.sum(axis=1)
100 loops, best of 3: 5.25 ms per loop
%timeit a[:, columns].sum(axis=1)
100 loops, best of 3: 7.24 ms per loop
%timeit col_sum(a, columns)
100 loops, best of 3: 2.46 ms per loop

You can use Numba. For best performance it is usually necessary to write simple loops, as you would in C.
(Numba is essentially a Python-to-LLVM-IR translator, much like Clang is for C.)
Code
import numpy as np
import numba as nb

@nb.njit(fastmath=True, parallel=True)
def row_sum(arr, columns):
    res = np.empty(arr.shape[0], dtype=arr.dtype)
    for i in nb.prange(arr.shape[0]):
        sum = 0.
        for j in range(columns.shape[0]):
            sum += arr[i, columns[j]]
        res[i] = sum
    return res
Timings
a = np.arange(1_000_000).reshape(1_000, 1_000)
columns = np.arange(1, 1000, 2)
%timeit res_1=a[:, columns].sum(axis=1)
1.29 ms ± 8.05 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit res_2=row_sum(a,columns)
59.3 µs ± 4.35 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
np.allclose(res_1,res_2)
True

With Transonic (https://transonic.readthedocs.io), it's easy to write code that can be accelerated by different Python accelerators (in practice Cython, Pythran and Numba).
For example, with the boost decorator, one can write
import numpy as np
from transonic import boost

T0 = "int[:, :]"
T1 = "int[:]"

@boost
def row_sum_loops(arr: T0, columns: T1):
    # locals type annotations are used only by Cython
    i: int
    j: int
    sum_: int
    res: "int[]" = np.empty(arr.shape[0], dtype=arr.dtype)
    for i in range(arr.shape[0]):
        sum_ = 0
        for j in range(columns.shape[0]):
            sum_ += arr[i, columns[j]]
        res[i] = sum_
    return res

@boost
def row_sum_transpose(arr: T0, columns: T1):
    return arr.T[columns].sum(0)
On my computer, I obtain:
TRANSONIC_BACKEND="python" python row_sum_boost.py
Checks passed: results are consistent
Python
row_sum_loops 108.57 s
row_sum_transpose 1.38 s
TRANSONIC_BACKEND="cython" python row_sum_boost.py
Checks passed: results are consistent
Cython
row_sum_loops 0.45 s
row_sum_transpose 1.32 s
TRANSONIC_BACKEND="numba" python row_sum_boost.py
Checks passed: results are consistent
Numba
row_sum_loops 0.27 s
row_sum_transpose 1.16 s
TRANSONIC_BACKEND="pythran" python row_sum_boost.py
Checks passed: results are consistent
Pythran
row_sum_loops 0.27 s
row_sum_transpose 0.76 s
See https://transonic.readthedocs.io/en/stable/examples/row_sum/txt.html for the full code and a more complete comparison on the example of this question.
Note that Pythran is also very efficient with the transonic.jit decorator.
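For reference, a minimal sketch of the jit variant (an illustration, assuming the Pythran backend is selected via TRANSONIC_BACKEND="pythran"; the function is compiled on first call for the argument types actually passed in):
import numpy as np
from transonic import jit

@jit
def row_sum_transpose(arr, columns):
    # Same body as the boosted version; compilation happens lazily at call time.
    return arr.T[columns].sum(0)

a = np.arange(1_000_000).reshape(1_000, 1_000)
columns = np.arange(1, 1000, 2)
print(row_sum_transpose(a, columns).shape)  # (1000,)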

Related

Adding single integer to numpy array faster if single integer has python-native int type

I add a single integer to an array of 1000 integers. This is about 25% faster when I first cast the single integer from numpy.int64 to the Python-native int.
Why? Should I, as a general rule of thumb, convert the single number to a native Python type for single-number-to-array operations with arrays of about this size?
Note: may be related to my previous question Conjugating a complex number much faster if number has python-native complex type.
import numpy as np
nnu = 10418
nnu_use = 5210
a = np.random.randint(nnu,size=1000)
b = np.random.randint(nnu_use,size=1)[0]
%timeit a + b # --> 3.9 µs ± 19.9 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit a + int(b) # --> 2.87 µs ± 8.07 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Note that the speed-up can be enormous (a factor of 50) for scalar-to-scalar operations as well, as seen below:
np.random.seed(100)
a = (np.random.rand(1))[0]
a_native = float(a)
b = complex(np.random.rand(1)+1j*np.random.rand(1))
c = (np.random.rand(1)+1j*np.random.rand(1))[0]
c_native = complex(c)
%timeit a * (b - b.conjugate() * c) # 6.48 µs ± 49.7 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit a_native * (b - b.conjugate() * c_native) # 283 ns ± 7.78 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
%timeit a * b # 5.07 µs ± 17.7 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit a_native * b # 94.5 ns ± 0.868 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
Update: Could it be that the latest numpy release fixes the speed difference? The release notes of numpy 1.23 mention that scalar operations are now much faster, see https://numpy.org/devdocs/release/1.23.0-notes.html#performance-improvements-and-changes and https://github.com/numpy/numpy/pull/21188. I am using python 3.7.6, numpy 1.21.2.
On my Windows PC with CPython 3.8.1, I get:
[Old] Numpy 1.22.4:
- First test: 1.65 µs VS 1.43 µs
- Second: 2.03 µs VS 0.17 µs
[New] Numpy 1.23.1:
- First test: 1.38 µs VS 1.24 µs <---- A bit better than Numpy 1.22.4
- Second: 0.38 µs VS 0.17 µs <---- Much better than Numpy 1.22.4
While the new version of Numpy gives a good boost, native types should always be faster than Numpy ones with the (default) CPython interpreter. Indeed, the interpreter needs to call the C functions of Numpy, which is not needed with native types. Additionally, Numpy's checks and wrapping are not optimal, but Numpy is not designed for fast scalar computation in the first place (though the overhead used to be unreasonable). In fact, scalar computations are very inefficient and the interpreter prevents any fast execution.
If you plan to do many scalar operations, you need to use natively compiled code, possibly using Cython, Numba, or even a raw C/C++ module. Note that Cython does not optimize/inline Numpy calls, but it can operate on native types much faster. Native code can do this in one or even two orders of magnitude less time.
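For illustration, a minimal sketch (not from the benchmark above) of moving scalar-heavy work into a Numba-compiled function, so the per-operation interpreter and wrapping overhead is paid only once per call:
import numba as nb

@nb.njit
def scalar_loop(a, b, n):
    # All n scalar multiplications/additions run as native machine code;
    # the Python-level call overhead is incurred only once.
    acc = 0.0
    for _ in range(n):
        acc += a * b - a
    return acc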
Note that in the first case, the path taken inside the Numpy functions is not the same, and Numpy does additional checks that are a bit more expensive when the value is not a CPython object. Still, it should be a constant overhead (and it is now relatively small). Otherwise, it would be a bug (and should be reported).
Related: Why is np.sum(range(N)) very slow?

Most efficient way to multiply a small matrix with a scalar in numpy

I have a program whose main performance bottleneck involves multiplying matrices which have one dimension of size 1 and another large dimension, e.g. 1000:
large_dimension = 1000
a = np.random.random((1,))
b = np.random.random((1, large_dimension))
c = np.matmul(a, b)
In other words, multiplying matrix b with the scalar a[0].
I am looking for the most efficient way to compute this, since this operation is repeated millions of times.
I tested the performance of the two trivial ways to do this, and they are practically equivalent:
%timeit np.matmul(a, b)
>> 1.55 µs ± 45.8 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
%timeit a[0] * b
>> 1.77 µs ± 34.6 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
Is there a more efficient way to compute this?
Note: I cannot move these computations to a GPU since the program is using multiprocessing and many such computations are done in parallel.
large_dimension = 1000
a = np.random.random((1,))
B = np.random.random((1, large_dimension))
%timeit np.matmul(a, B)
5.43 µs ± 22 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit a[0] * B
5.11 µs ± 6.92 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Just use a Python float:
%timeit float(a[0]) * B
3.48 µs ± 26.1 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
To avoid memory allocation, use an output buffer:
buffer = np.empty_like(B)
%timeit np.multiply(float(a[0]), B, buffer)
2.96 µs ± 37.1 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
To avoid the repeated attribute lookup, use an alias:
mul = np.multiply
%timeit mul(float(a[0]), B, buffer)
2.73 µs ± 12.6 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
And I don't recommend using NumPy scalars at all; avoiding them makes the computation faster:
a_float = float(a[0])
%timeit mul(a_float, B, buffer)
1.94 µs ± 5.74 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
Furthermore, if possible, initialize the buffer outside the loop, once (assuming, of course, that you have a loop):
rng = range(1000)
%%timeit
for i in rng:
pass
24.4 µs ± 1.21 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%%timeit
for i in rng:
mul(a_float, B, buffer)
1.91 ms ± 2.21 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
So,
best_iteration_time = (1.91 ms - 0.02 ms) / 1000 ≈ 1.89 µs
speedup = 5.43 µs / 1.89 µs ≈ 2.87
In this case, it is probably faster to work with an element-wise multiplication, but the time you see is mostly the overhead of Numpy (calling C functions from the CPython interpreter, wrapping/unwrapping types, making checks, doing the operation, array allocations, etc.).
since this operation is repeated millions of times
This is the problem. Indeed, the CPython interpreter is very bad at doing things with low latency. This is especially true when you work on Numpy types, as calling C code and performing checks for a trivial operation is much slower than doing it in pure Python, which is itself much slower than compiled native C/C++ code. If you really need this, and you cannot vectorize your code using Numpy (because you have a loop iterating over timesteps), then you need to move away from CPython, or at least away from pure Python code. Instead, you can use Numba or Cython to mitigate the cost of the C calls, type wrapping, etc. If this is not enough, you will need to write native C/C++ code (or any similar language), unless you find a dedicated Python package that does exactly this for you. Note that Numba is fast only when it works on native types or Numpy arrays (containing native types). If you work with a lot of pure Python types and do not want to rewrite your code, you can try the PyPy JIT.
Here is a simple example in Numba, written specifically for your case, that avoids the (costly) creation/allocation of a new array as well as many Numpy internal checks and calls:
import numpy as np
import numba as nb

@nb.njit('void(float64[::1],float64[:,::1],float64[:,::1])')
def fastMul(a, b, out):
    val = a[0]
    for i in range(b.shape[1]):
        out[0, i] = b[0, i] * val

res = np.empty(b.shape, dtype=b.dtype)
%timeit fastMul(a, b, res)
# 397 ns ± 0.587 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
At the time of writing, this solution is faster than all the others. As most of the time is spent in calling Numba and performing some internal checks, using Numba directly for the function containing the iteration loop should result in even faster code.
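For example, a sketch (an assumption, not shown in the answer) of moving the repetition loop itself into the jitted function, so the call overhead is paid only once:
import numpy as np
import numba as nb

@nb.njit('void(float64[::1], float64[:,::1], float64[:,::1], int64)')
def fastMulMany(a, b, out, repeats):
    # The whole repetition loop runs natively; fastMul's per-call overhead is gone.
    val = a[0]
    for _ in range(repeats):
        for i in range(b.shape[1]):
            out[0, i] = b[0, i] * val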
import numpy as np
import numba

def matmult_numpy(matrix, c):
    return np.matmul(c, matrix)

@numba.jit(nopython=True)
def matmult_numba(matrix, c):
    return c * matrix

if __name__ == "__main__":
    large_dimension = 1000
    a = np.random.random((1, large_dimension))
    c = np.random.random((1,))
About a factor of 3 speedup using Numba. Numba cognoscenti may be able to do better by explicitly casting the parameter "c" as a scalar.
Check that the two functions return the same result, then compare timings:
%timeit matmult_numpy(a, c)
2.32 µs ± 50 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit matmult_numba(a, c)
763 ns ± 6.67 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

Creating an array of numbers that add up to 1 with given length

I'm trying to use different weights for my model, and I need those weights to add up to 1, like this:
def func(length):
    return ['a list of numbers add up to 1 with given length']
func(4) returns [0.1, 0.2, 0.3, 0.4]
The numbers should be linearly spaced and they should not start from 0. Is there any way to achieve this with numpy or scipy?
This can be done quite simply using numpy arrays:
def func(length):
    linArr = np.arange(1, length+1)
    return linArr/linArr.sum()
First we create an array of length length ranging from 1 to length. Then we normalize by the sum.
Thanks to Paul Panzer for pointing out that the efficiency of this function can be improved by using Gauss's formula for the sum of the first n integers:
def func(length):
    linArr = np.arange(1, length+1)
    arrSum = length * (length+1) // 2
    return linArr/arrSum
For large inputs, you might find that using np.linspace is faster than the accepted answer
def f1(length):
    linArr = np.arange(1, length+1)
    arrSum = length * (length+1) // 2
    return linArr/arrSum

def f2(l):
    delta = 2/(l*(l+1))
    return np.linspace(delta, l*delta, l)
Ensure that the two things produce the same result:
In [39]: np.allclose(f1(1000000), f2(1000000))
Out[39]: True
Check timing of both:
In [68]: %timeit f1(10000000)
515 ms ± 28.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [69]: %timeit f2(10000000)
247 ms ± 4.57 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
It's tempting to just use np.arange(delta, l*delta, delta) which should be even faster, but this does present the risk of rounding errors causing the array to have lengths different from l (as will happen e.g. for l = 10000000).
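A quick, illustrative way to check that pitfall for a given l:
import numpy as np

l = 10_000_000
delta = 2 / (l * (l + 1))
arr = np.arange(delta, l * delta, delta)
# Depending on floating-point rounding, len(arr) can come out as l - 1 or l + 1
# instead of l, which is why np.linspace is the safer choice here.
print(len(arr), l)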
If speed is more important than code style, it might also be possible to squeeze out a bit more by using Numba:
import numpy as np
from numba import jit

@jit
def f3(l):
    a = np.empty(l, dtype=np.float64)
    delta = 2/(l*(l+1))
    for n in range(l):
        a[n] = (n+1)*delta
    return a
In [96]: %timeit f3(10000000)
216 ms ± 16.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
While we're at it, let's note that it's possible to parallelize this loop. Doing so naively with Numba doesn't appear to give much, but helping it out a bit and pre-splitting the array into num_parallel parts does give further improvement on a quad core system:
import numpy as np
from numba import njit, prange

@njit(parallel=True)
def f4(l, num_parallel=4):
    a = np.empty(l, dtype=np.float64)
    delta = 2/(l*(l+1))
    for j in prange(num_parallel):
        # The last chunk gets whatever's left over from rounding
        offset = 0 if j != num_parallel - 1 else l % num_parallel
        for n in range(l//num_parallel + offset):
            i = j*(l//num_parallel) + n
            a[i] = (i+1)*delta
    return a
In [171]: %timeit f4(10000000, 4)
163 ms ± 13.2 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [172]: %timeit f4(10000000, 8)
158 ms ± 5.58 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [173]: %timeit f4(10000000, 12)
157 ms ± 8.77 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
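As a quick sanity check (not part of the timings above), all four variants agree and the result sums to 1 up to floating-point error:
import numpy as np

x = f1(1_000_000)
for f in (f2, f3, f4):
    assert np.allclose(x, f(1_000_000))
print(np.isclose(x.sum(), 1.0))  # True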

Python FOR loop vs. builtin reduce() and mul() functions?

Here I have two functions:
from functools import reduce
from operator import mul

def fn1(arr):
    product = 1
    for number in arr:
        product *= number
    return [product//x for x in arr]

def fn2(arr):
    product = reduce(mul, arr)
    return [product//x for x in arr]
Benchmark:
In [2]: arr = list(range(1,11))
In [3]: %timeit fn1(arr)
1.62 µs ± 23.5 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
In [4]: %timeit fn2(arr)
1.88 µs ± 28.3 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
In [5]: arr = list(range(1,101))
In [6]: %timeit fn1(arr)
38.5 µs ± 190 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [7]: %timeit fn2(arr)
41 µs ± 463 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [8]: arr = list(range(1,1001))
In [9]: %timeit fn1(arr)
4.23 ms ± 25.8 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [10]: %timeit fn2(arr)
4.24 ms ± 36.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [11]: arr = list(range(1,10001))
In [12]: %timeit fn1(arr)
605 ms ± 4.97 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [13]: %timeit fn2(arr)
594 ms ± 4.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Here fn2() is marginally slower with the small lists. My understanding was that reduce() and mul() are both builtin functions, therefore they run at C speed and should be faster than the for loop. Perhaps the extra function calls inside fn2() (which also take some time) account for this? But then the trend shows that fn2() outperforms fn1() with the larger lists. Why?
There could be a lot of reasons for this. The first is CPU branch prediction and compiler optimizations. In my view, it shouldn't matter much which form of the code is faster as long as you choose a proper algorithm: use the one that suits your needs and reads better, and leave performance to the Python implementation. The faster option often causes more problems with memory, readability, and maintenance. Also, it is not guaranteed that slightly more complex code inside simple loops won't change performance, simply because some optimizations may no longer be applied.
-- Update --
If you want to increase the performance of Python on simple operations, I would advise you to look at PyPy, Cython, or Nim, which are made to reach C-like performance when Python's type wrappers cost too much.
These are always going to be very close, but for somewhat interesting reasons:
If the product is small (e.g. for range(1, 10)), the functions are doing very little "useful" work, and everything goes into marshalling machine ints between Python objects so the functions can be invoked.
If the product is large (e.g. the range(1, 10001) case), the numbers become enormous (i.e. thousands of decimal digits) and the majority of the time is spent multiplying very large numbers.
For example, with Python 3.7.3 (on Linux 5.0.10):
from functools import reduce
from operator import mul
prod = reduce(mul, range(1, 10001))
prod has ~36k digits and consumes ~16 KiB --- check with math.log10(prod) and sys.getsizeof(prod).
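For instance, a quick check of those numbers:
import math
import sys
from functools import reduce
from operator import mul

prod = reduce(mul, range(1, 10001))
print(math.log10(prod))     # roughly 35659, i.e. ~36k decimal digits
print(sys.getsizeof(prod))  # roughly 16 KiB on CPython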
Working with small (i.e. not bignum) products such as:
reduce(mul, [1] * 10001)
is ~50 times faster on my computer than when we need to use bignums as above. Also notice that it's basically the same speed when using floating point numbers as compared to integers, e.g.
reduce(mul, [1.] * 10001)
only takes ~10% more time.
Your additional code, which makes another pass over the array, seems to just complicate matters, so I've ignored it --- microbenchmarks like these are awkward enough to get right!

Quicker way to implement numpy.isin followed by sum

I am performing data analysis using a Python script, and I learned from profiling that more than 95% of the computation time is taken by the line performing np.sum(C[np.isin(A, b)]), where A and C are 2D NumPy arrays of equal dimension m x n, and b is a 1D array of variable length. Short of a dedicated NumPy function, is there a way to accelerate such a computation?
Typical sizes of A (int64), C (float64): 10M x 100
Typical size of b (int64): 1000
As your labels are from a small integer range, you should get a sizeable speedup from using np.bincount (pp below). Alternatively, you can speed up the lookup by creating a mask (p2). This---as does your original code---allows replacing np.sum with math.fsum, which guarantees a result exact to within machine precision (p3). Alternatively, we can pythranize it for another 40% speedup (p4).
On my rig the Numba solution (mx) is about as fast as pp, but maybe I'm not doing it right.
import numpy as np
import math
from subsum import pflat

MAXIND = 120_000

def OP():
    return sum(C[np.isin(A, b)])

def pp():
    return np.bincount(A.reshape(-1), C.reshape(-1), MAXIND)[np.unique(b)].sum()

def p2():
    grid = np.zeros(MAXIND, bool)
    grid[b] = True
    return C[grid[A]].sum()

def p3():
    grid = np.zeros(MAXIND, bool)
    grid[b] = True
    return math.fsum(C[grid[A]])

def p4():
    return pflat(A.ravel(), C.ravel(), b, MAXIND)

import numba as nb

@nb.njit(parallel=True, fastmath=True)
def nb_ss(A, C, b):
    s = set(b)
    sum = 0.
    for i in nb.prange(A.shape[0]):
        for j in range(A.shape[1]):
            if A[i, j] in s:
                sum += C[i, j]
    return sum

def mx():
    return nb_ss(A, C, b)

sh = 100_000, 100
A = np.random.randint(0, MAXIND, sh)
C = np.random.random(sh)
b = np.random.randint(0, MAXIND, 1000)

print(OP(), pp(), p2(), p3(), p4(), mx())

from timeit import timeit
print("OP", timeit(OP, number=4)*250)
print("pp", timeit(pp, number=10)*100)
print("p2", timeit(p2, number=10)*100)
print("p3", timeit(p3, number=10)*100)
print("p4", timeit(p4, number=10)*100)
print("mx", timeit(mx, number=10)*100)
The code for the pythran module:
[subsum.py]
import numpy as np

# pythran export pflat(int[:], float[:], int[:], int)
def pflat(A, C, b, MAXIND):
    grid = np.zeros(MAXIND, bool)
    grid[b] = True
    return C[grid[A]].sum()
Compilation is as simple as pythran subsum.py
Sample run:
41330.15849965791 41330.15849965748 41330.15849965747 41330.158499657475 41330.15849965791 41330.158499657446
OP 1963.3807722493657
pp 53.23419079941232
p2 21.8758742994396
p3 26.829131800332107
p4 12.988955597393215
mx 52.37018179905135
I assume you have changed int64 to int8 wherever required.
You can use Numba's parallel JIT feature for faster NumPy computations that make use of all the cores.
import numpy as np
import numba

# The original snippet had no function name; 'isin_sum' is a placeholder.
@numba.jit(nopython=True, parallel=True)
def isin_sum(A, C, b):
    return np.sum(C[np.isin(A, b)])
Documentation for Numba Parallel
I don't know why np.isin is that slow, but you can implement your function quite a lot faster.
The following Numba solution uses a set for fast lookup of values and is parallelized. The memory footprint is also smaller than in the Numpy implementation.
Code
import numpy as np
import numba as nb

@nb.njit(parallel=True, fastmath=True)
def nb_pp(A, C, b):
    s = set(b)
    sum = 0.
    for i in nb.prange(A.shape[0]):
        for j in range(A.shape[1]):
            if A[i, j] in s:
                sum += C[i, j]
    return sum
Timings
The pp implementation and the first data sample are from Paul Panzer's answer above.
MAXIND = 120_000
sh = 100_000, 100
A = np.random.randint(0, MAXIND, sh)
C = np.random.random(sh)
b = np.random.randint(0, MAXIND, 1000)
MAXIND = 120_000
%timeit res_1=np.sum(C[np.isin(A, b)])
1.5 s ± 10.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit res_2=pp(A,C,b)
62.5 ms ± 624 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit res_3=nb_pp(A,C,b)
17.1 ms ± 141 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
MAXIND = 10_000_000
%timeit res_1=np.sum(C[np.isin(A, b)])
2.06 s ± 27.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit res_2=pp(A,C,b)
206 ms ± 3.67 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit res_3=nb_pp(A,C,b)
17.6 ms ± 332 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
MAXIND = 100
%timeit res_1=np.sum(C[np.isin(A, b)])
1.01 s ± 20.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit res_2=pp(A,C,b)
46.8 ms ± 538 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit res_3=nb_pp(A,C,b)
3.88 ms ± 84.8 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
