NumPy: do multiple operations cause intermediate arrays to be created?

I'm wondering if these two are equivalent:
import numpy as np
a = np.arange(100000) + 1
# 1
b = 10 * np.log10(a)
# 2
c = np.empty_like(a)
c = np.multiply(10, np.log10(a, out=c), out=c)
More precisely, I wonder whether NumPy has some way to create array b in the first version without allocating an intermediate array for the result of the log operation that is then thrown away. Of course, this only matters for very big arrays.
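For comparison, a minimal sketch of a variant that reuses a single output buffer (assuming a float array, as in the timings below; the name h is just illustrative):
def h(a):
    c = np.log10(a)   # one allocation for the result
    c *= 10           # in-place scaling, no extra temporary
    return c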

In terms of computing time, they seem to be roughly similar, although the first version is a bit better:
In [1]: import numpy as np
In [2]: a = np.arange(100_000, dtype=float) + 1
In [3]: def f(a):
   ...:     b = 10 * np.log10(a)
   ...:
In [4]: def g(a):
   ...:     c = np.empty_like(a)
   ...:     c = np.multiply(10, np.log10(a, out=c), out=c)
   ...:
In [5]: %timeit f(a)
759 µs ± 52.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [6]: %timeit g(a)
877 µs ± 39.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

Related

Multiplying integers by booleans and understanding numpy array comparison

I'm writing a program in which there is a numpy array a whose elements can take three possible values: -1, 0 or 1. I am trying to multiply some of its elements by a number c. The idea is to obtain this behaviour:
for i, el in enumerate(a):
    if el == b:
        a[i] *= c
I came up with a solution that does not require any loops and works a couple of orders of magnitude faster than the previous one. This is the code I used to test them:
# Long array with random integers between -1 and 1
a = np.random.choice(3,1000000) - 1
a1 = a.copy()
a2 = a.copy()
# Reference values for b and c
b = 1
c = 10
# Solution with loop
t0 = time.time()
for i, el in enumerate(a1):
    if el == b:
        a1[i] *= c
t1 = time.time()
# Solution without loop
a2 = a2*((a2 == b)*c + (a2 != b))
t2 = time.time()
print("No loop: %f s"%(t1 - t0))
print("Loop: %f s"%(t2 - t1))
Although it seems to be working fine, I'm not really happy with multiplying integers by booleans, but I don't know whether I should be worried, so I would appreciate it if anyone could tell me a bit more about what NumPy is doing and/or whether there is a better way to do this that I am not considering.
Thanks in advance!
NumPy will cast the bool type to the integer type, with False and True converted to 0 and 1 respectively. This casting is safe, so don't worry, be happy.
In [8]: np.can_cast(np.bool8, np.intc)
Out[8]: True
If you prefer to be explicit, you could do that casting yourself by replacing (a2 == b) with (a2 == b).astype(int), but that is not necessary.
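A small illustration of the implicit bool-to-int promotion at work (values chosen here just for demonstration):
import numpy as np

a = np.array([-1, 0, 1, 1])
b, c = 1, 10
factor = (a == b) * c + (a != b)   # booleans are promoted to 0/1 before the arithmetic
print(factor)                      # [ 1  1 10 10]
print(a * factor)                  # [-1  0 10 10]  -> only the elements equal to b were scaled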
Some comparative timings:
In [66]: %%timeit a2=a.copy()
...: a2*((a2==b)*10 + (a2!=b))
14.4 ms ± 36.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [67]: %%timeit a2=a.copy()
...: a2[a2==b] *= 10
1.96 ms ± 75 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [68]: %%timeit a2=a.copy()
...: a2[a2==b] = a2[a2==b]*10
3.28 ms ± 5.63 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [69]: %%timeit a2=a.copy()
...: np.multiply(a2, 10, where=a2==b, out=a2)
1.63 ms ± 3.38 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
The fastest ones only do one a2==b test. The multiply with the where parameter is fastest, but also a bit harder to understand.
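For intuition, a rough spelled-out equivalent of that where= call (the key detail is that positions where the condition is False simply keep whatever out already holds, which here is a2 itself):
mask = (a2 == b)
a2[mask] = a2[mask] * 10   # False positions are left untouched because out=a2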
And to verify that the fastest produces the same thing:
In [73]: a2=a.copy();a2=a2*((a2==b)*10 + (a2!=b))
In [74]: a3=a.copy();np.multiply(a3, 10, where=a3==b, out=a3);
In [75]: np.allclose(a2,a3)
Out[75]: True

Given two 2D numpy arrays A and B, how to efficiently apply a function that takes two 1D arrays to each combination of rows of A and B?

To be clear, below is what I am trying to do. And the question is, how can I change the function oper_AB() so that instead of the nested for loop, I utilize the vectorization/broadcasting in numpy and get to the ret_list much faster?
def oper(a_1D, b_1D):
    return np.dot(a_1D, b_1D) / np.dot(b_1D, b_1D)

def oper_AB(A_2D, B_2D):
    ret_list = []
    for a_1D in A_2D:
        for b_1D in B_2D:
            ret_list.append(oper(a_1D, b_1D))
    return ret_list
Strictly addressing the question (with the reservation that I suspect the OP wants the norm, not the norm squared, as divisor below):
r = a @ b.T / np.linalg.norm(b, axis=1)**2
Example:
np.random.seed(0)
a = np.random.randint(0, 10, size=(2,2))
b = np.random.randint(0, 10, size=(2,2))
Then:
>>> a
array([[5, 0],
       [3, 3]])
>>> b
array([[7, 9],
       [3, 5]])
>>> oper_AB(a, b)
[0.2692307692307692,
 0.4411764705882353,
 0.36923076923076925,
 0.7058823529411765]
>>> a @ b.T / np.linalg.norm(b, axis=1)**2
array([[0.26923077, 0.44117647],
       [0.36923077, 0.70588235]])
>>> np.ravel(a @ b.T / np.linalg.norm(b, axis=1)**2)
array([0.26923077, 0.44117647, 0.36923077, 0.70588235])
Speed:
n, m = 1000, 100
a = np.random.uniform(size=(n, m))
b = np.random.uniform(size=(n, m))
orig = %timeit -o oper_AB(a, b)
# 2.73 s ± 11 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
new = %timeit -o np.ravel(a @ b.T / np.linalg.norm(b, axis=1)**2)
# 2.22 ms ± 33.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
orig.average / new.average
# 1228.78 (speedup)
Our solution is 1200x faster than the original.
Correctness:
>>> np.allclose(np.ravel(a @ b.T / np.linalg.norm(b, axis=1)**2), oper_AB(a, b))
True
Speed on large array, comparison to @Ahmed AEK's solution:
n, m = 2000, 2000
a = np.random.uniform(size=(n, m))
b = np.random.uniform(size=(n, m))
new = %timeit -o np.ravel(a @ b.T / np.linalg.norm(b, axis=1)**2)
# 86.5 ms ± 484 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
other = %timeit -o AEK(a, b) # Ahmed AEK's answer
# 102 ms ± 379 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Our solution is 15% faster :-)
This should work:
result = (np.matmul(A_2D, B_2D.transpose())/np.sum(B_2D*B_2D,axis=1)).flatten()
But this second implementation will be faster because of better cache utilization:
def oper_AB(A_2D, B_2D):
    b_squared = np.sum(B_2D*B_2D, axis=1).reshape([-1, 1])
    b_normalized = B_2D/b_squared
    del b_squared
    returned_val = np.matmul(A_2D, b_normalized.transpose())
    return returned_val.flatten()
The del is there just in case the memory allocated for B_2D is too big (or it's just me being used to working with multi-GB arrays).
Edit: as requested, for A_1D - B_1D:
def oper2_AB(A_2D, B_2D):
    output = np.zeros([A_2D.shape[0]*B_2D.shape[0], A_2D.shape[1]], dtype=A_2D.dtype)
    for i in range(len(A_2D)):
        output[i*len(B_2D):(i+1)*len(B_2D)] = A_2D[i] - B_2D
    return output
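For reference, a hedged broadcasting sketch that produces the same A_1D - B_1D result without the Python loop (the function name is just illustrative; it materializes the full (nA, nB, m) intermediate, so the loop above is gentler on memory):
def oper2_AB_broadcast(A_2D, B_2D):
    # (nA, 1, m) - (1, nB, m) broadcasts to (nA, nB, m); then flatten the pair axes
    return (A_2D[:, None, :] - B_2D[None, :, :]).reshape(-1, A_2D.shape[1])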

Creating an array of numbers that add up to 1 with given length

I'm trying to use different weights for my model and I need those weights add up to 1 like this;
def func(length):
    return ['a list of numbers add up to 1 with given length']
func(4) returns [0.1, 0.2, 0.3, 0.4]
The numbers should be linearly spaced and they should not start from 0. Is there any way to achieve this with numpy or scipy?
This can be done quite simply using numpy arrays:
def func(length):
    linArr = np.arange(1, length+1)
    return linArr/linArr.sum()
First we create an array of the requested length containing the integers from 1 to length. Then we divide by its sum so the result adds up to 1.
Thanks to Paul Panzer for pointing out that the efficiency of this function can be improved by using Gauss's formula for the sum of the first n integers:
def func(length):
    linArr = np.arange(1, length+1)
    arrSum = length * (length+1) // 2
    return linArr/arrSum
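A quick sanity check against the behaviour requested in the question (illustrative only):
print(func(4))         # [0.1 0.2 0.3 0.4]
print(func(4).sum())   # 1.0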
For large inputs, you might find that using np.linspace is faster than the accepted answer
def f1(length):
    linArr = np.arange(1, length+1)
    arrSum = length * (length+1) // 2
    return linArr/arrSum

def f2(l):
    delta = 2/(l*(l+1))
    return np.linspace(delta, l*delta, l)
Ensure that the two things produce the same result:
In [39]: np.allclose(f1(1000000), f2(1000000))
Out[39]: True
Check timing of both:
In [68]: %timeit f1(10000000)
515 ms ± 28.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [69]: %timeit f2(10000000)
247 ms ± 4.57 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
It's tempting to just use np.arange(delta, l*delta, delta), which should be even faster, but this presents the risk of rounding errors causing the array to have a length different from l (as will happen e.g. for l = 10000000).
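A hedged way to check for that hazard on a given l (the exact outcome depends on floating-point rounding and the exclusive stop of np.arange):
l = 10_000_000
delta = 2/(l*(l+1))
arr = np.arange(delta, l*delta, delta)
print(len(arr), len(arr) == l)   # off-by-one lengths are possible here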
If speed is more important than code style, it might also be possible to squeeze out a bit more by using Numba:
from numba import jit
@jit
def f3(l):
    a = np.empty(l, dtype=np.float64)
    delta = 2/(l*(l+1))
    for n in range(l):
        a[n] = (n+1)*delta
    return a
In [96]: %timeit f3(10000000)
216 ms ± 16.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
While we're at it, let's note that it's possible to parallelize this loop. Doing so naively with Numba doesn't appear to give much, but helping it out a bit and pre-splitting the array into num_parallel parts does give further improvement on a quad core system:
from numba import njit, prange
@njit(parallel=True)
def f4(l, num_parallel=4):
    a = np.empty(l, dtype=np.float64)
    delta = 2/(l*(l+1))
    for j in prange(num_parallel):
        # The last iteration gets whatever's left from rounding
        offset = 0 if j != num_parallel - 1 else l % num_parallel
        for n in range(l//num_parallel + offset):
            i = j*(l//num_parallel) + n
            a[i] = (i+1)*delta
    return a
In [171]: %timeit f4(10000000, 4)
163 ms ± 13.2 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [172]: %timeit f4(10000000, 8)
158 ms ± 5.58 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [173]: %timeit f4(10000000, 12)
157 ms ± 8.77 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

Quicker way to implement numpy.isin followed by sum

I am performing data analysis using a Python script and learned from profiling that more than 95% of the computation time is taken by the line which performs the operation np.sum(C[np.isin(A, b)]), where A and C are 2D NumPy arrays of equal dimension m x n, and b is a 1D array of variable length. I am wondering whether there is a way, if not a dedicated NumPy function, to accelerate such a computation?
Typical sizes of A (int64), C (float64): 10M x 100
Typical size of b (int64): 1000
As your labels are from a small integer range you should get a sizeable speedup from using np.bincount (pp below). Alternatively, you can speed up the lookup by creating a mask (p2). This, like your original code, allows for replacing np.sum with math.fsum, which guarantees a result exact to within machine precision (p3). Alternatively, we can pythranize it for another 40% speedup (p4).
On my rig the numba soln (mx) is about as fast as pp but maybe I'm not doing it right.
import numpy as np
import math
from subsum import pflat

MAXIND = 120_000

def OP():
    return sum(C[np.isin(A, b)])

def pp():
    return np.bincount(A.reshape(-1), C.reshape(-1), MAXIND)[np.unique(b)].sum()

def p2():
    grid = np.zeros(MAXIND, bool)
    grid[b] = True
    return C[grid[A]].sum()

def p3():
    grid = np.zeros(MAXIND, bool)
    grid[b] = True
    return math.fsum(C[grid[A]])

def p4():
    return pflat(A.ravel(), C.ravel(), b, MAXIND)

import numba as nb

@nb.njit(parallel=True, fastmath=True)
def nb_ss(A, C, b):
    s = set(b)
    sum = 0.
    for i in nb.prange(A.shape[0]):
        for j in range(A.shape[1]):
            if A[i, j] in s:
                sum += C[i, j]
    return sum

def mx():
    return nb_ss(A, C, b)

sh = 100_000, 100
A = np.random.randint(0, MAXIND, sh)
C = np.random.random(sh)
b = np.random.randint(0, MAXIND, 1000)

print(OP(), pp(), p2(), p3(), p4(), mx())

from timeit import timeit
print("OP", timeit(OP, number=4)*250)
print("pp", timeit(pp, number=10)*100)
print("p2", timeit(p2, number=10)*100)
print("p3", timeit(p3, number=10)*100)
print("p4", timeit(p4, number=10)*100)
print("mx", timeit(mx, number=10)*100)
The code for the pythran module:
[subsum.py]
import numpy as np
#pythran export pflat(int[:], float[:], int[:], int)
def pflat(A, C, b, MAXIND):
    grid = np.zeros(MAXIND, bool)
    grid[b] = True
    return C[grid[A]].sum()
Compilation is as simple as pythran subsum.py
Sample run:
41330.15849965791 41330.15849965748 41330.15849965747 41330.158499657475 41330.15849965791 41330.158499657446
OP 1963.3807722493657
pp 53.23419079941232
p2 21.8758742994396
p3 26.829131800332107
p4 12.988955597393215
mx 52.37018179905135
I assume you have changed int64 to int8 wherever required.
You can use Numba's parallel and jit features for faster NumPy computations that make use of multiple cores.
import numba
import numpy as np

@numba.jit(nopython=True, parallel=True)
def isin_sum(A, C, b):   # illustrative name; the original snippet left the function unnamed
    return np.sum(C[np.isin(A, b)])
Documentation for Numba Parallel
I don't know why np.isin is that slow, but you can implement your function quite a lot faster.
The following Numba solution uses a set for fast lookup of values and is parallelized. The memory footprint is also smaller than in the Numpy implementation.
Code
import numpy as np
import numba as nb
@nb.njit(parallel=True, fastmath=True)
def nb_pp(A, C, b):
    s = set(b)
    sum = 0.
    for i in nb.prange(A.shape[0]):
        for j in range(A.shape[1]):
            if A[i, j] in s:
                sum += C[i, j]
    return sum
Timings
The pp implementation and the first data sample are from Paul Panzer's answer above.
MAXIND = 120_000
sh = 100_000, 100
A = np.random.randint(0, MAXIND, sh)
C = np.random.random(sh)
b = np.random.randint(0, MAXIND, 1000)
MAXIND = 120_000
%timeit res_1=np.sum(C[np.isin(A, b)])
1.5 s ± 10.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit res_2=pp(A,C,b)
62.5 ms ± 624 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit res_3=nb_pp(A,C,b)
17.1 ms ± 141 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
MAXIND = 10_000_000
%timeit res_1=np.sum(C[np.isin(A, b)])
2.06 s ± 27.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit res_2=pp(A,C,b)
206 ms ± 3.67 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit res_3=nb_pp(A,C,b)
17.6 ms ± 332 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
MAXIND = 100
%timeit res_1=np.sum(C[np.isin(A, b)])
1.01 s ± 20.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit res_2=pp(A,C,b)
46.8 ms ± 538 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit res_3=nb_pp(A,C,b)
3.88 ms ± 84.8 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)

Performance of numpy.insert dependent on array size - workaround?

Using the following code, I get the impression that inserting into a numpy array is dependent on the array size.
Are there any numpy based workarounds for this performance limit (or also non numpy based)?
if True:
    import numpy as np
    import datetime
    import timeit
    myArray = np.empty((0, 2), dtype='object')
    myString = "myArray = np.insert(myArray, myArray.shape[0], [[ds, runner]], axis=0)"
    runner = 1
    ds = datetime.datetime.utcfromtimestamp(runner)
    %timeit myString
19.3 ns ± 0.715 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
    for runner in range(30_000):
        ds = datetime.datetime.utcfromtimestamp(runner)
        myArray = np.insert(myArray, myArray.shape[0], [[ds, runner]], axis=0)
    print("len(myArray):", len(myArray))
    %timeit myString
len(myArray): 30000
38.1 ns ± 1.1 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
This has to do with the way numpy works. For each insert operation, it copies the whole array and stores it in a new place. I would recommend using list append and then converting to a numpy array. This may be a duplicate of this question.
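A minimal sketch of that list-append approach applied to the code in the question (built from the snippet above, untimed here):
import datetime
import numpy as np

rows = []
for runner in range(30_000):
    ds = datetime.datetime.utcfromtimestamp(runner)
    rows.append([ds, runner])
myArray = np.array(rows, dtype='object')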
Your approach:
In [18]: arr = np.array([])
In [19]: for i in range(1000):
    ...:     arr = np.insert(arr, arr.shape[0], [1,2,3])
    ...:
In [20]: arr.shape
Out[20]: (3000,)
In [21]: %%timeit
    ...: arr = np.array([])
    ...: for i in range(1000):
    ...:     arr = np.insert(arr, arr.shape[0], [1,2,3])
    ...:
31.9 ms ± 194 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Compare that with concatenate:
In [22]: %%timeit
    ...: arr = np.array([])
    ...: for i in range(1000):
    ...:     arr = np.concatenate((arr, [1,2,3]))
    ...:
5.49 ms ± 20.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
and with a list extend:
In [23]: %%timeit
    ...: alist = []
    ...: for i in range(1000):
    ...:     alist.extend([1,2,3])
    ...: arr = np.array(alist)
384 µs ± 13.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
We discourage the use of concatenate (or np.append) because it is slow, and the initial empty array can be hard to set up correctly. List append, or extend, is faster. Your use of insert is even worse than concatenate.
concatenate makes a whole new array each time. insert does so as well, but because it's designed to put the new values anywhere in the original, it is much more complicated, and hence slower. Look at its code if you don't believe me.
Lists are designed for growth; new items are added via a simple object (pointer) insertion into a buffer that has room to grow. That is, the growth occurs in-place.
Insertion into a preallocated array is also pretty good:
In [27]: %%timeit
    ...: arr = np.zeros((1000,3), int)
    ...: for i in range(1000):
    ...:     arr[i,:] = [1,2,3]
    ...: arr = arr.ravel()
1.69 ms ± 9.47 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
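For completeness, a hedged sketch of the same pre-allocation idea applied to the (ds, runner) object array from the question (shapes assumed from the original snippet):
import datetime
import numpy as np

n = 30_000
myArray = np.empty((n, 2), dtype='object')
for runner in range(n):
    myArray[runner, 0] = datetime.datetime.utcfromtimestamp(runner)
    myArray[runner, 1] = runner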
