I am creating this array for my shader, and this step is very slow because it uses a nested for loop. Currently this method takes approximately 1 second. Can anyone suggest a faster way to create this array?
import numpy as np
elems = []
b = 23503
a = 24
for i in range(0, a - 1):
    for j in range(0, b - 1):
        elems += [j + b * i, j + b * i + 1, j + b * (i + 1)]
        elems += [j + b * (i + 1), j + b * (i + 1) + 1, j + b * i + 1]
elems = np.array(elems, dtype=np.int32)
First I would recognise that there is a lot of repeated computation. The base term involving the iterator variables here is i*b+j, so let's have NumPy create an array that contains those values in the order they should appear:
ib_j = (np.arange(a-1)[:, None]*b + np.arange(b-1)).flatten()
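For example (a quick illustration of my own, not part of the original answer), with a = 3 and b = 4 this gives the base index of each quad in row-major order:

>>> (np.arange(3-1)[:, None]*4 + np.arange(4-1)).flatten()
array([0, 1, 2, 4, 5, 6])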
Next we compute the six different columns from this base, stack them horizontally, and flatten:
def create_shader_array(a, b):
    ib_j = (np.arange(a-1)[:, None]*b + np.arange(b-1)).flatten()
    return np.column_stack((ib_j, ib_j+1, ib_j+b, ib_j+b, ib_j+b+1, ib_j+1)).flatten()
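As a tiny worked example (my own, with hypothetical sizes), a = 2 and b = 3 produces the six indices of the quad's two triangles:

>>> create_shader_array(2, 3)
array([0, 1, 3, 3, 4, 1, 1, 2, 4, 4, 5, 2])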
Validation:
>>> all(create_shader_array(a, b) == AKS(a, b)) # AKS is your original implementation
True
Timing:
>>> %timeit AKS(24, 23503)
1.02 s ± 8.25 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
>>> %timeit create_shader_array(24, 23503)
28.8 ms ± 364 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
You can use meshgrid to cover the i and j iterations, then do an outer add to apply the inner offsets. Use ravel at the end to get a 1D array.
inner = np.array([0, 1, b, b, b+1, 1], dtype="int32")
j, i = np.meshgrid(np.arange(b-1), np.arange(a-1))
elems = np.add.outer((j+b*i), inner).ravel()
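To see what the outer add does (a small illustration of my own, using b = 5 for brevity): every base index gets all six offsets added to it, producing one row per quad, and ravel then flattens those rows in order:

>>> np.add.outer(np.array([0, 1]), np.array([0, 1, 5, 5, 6, 1]))
array([[0, 1, 5, 5, 6, 1],
       [1, 2, 6, 6, 7, 2]])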
Or, collapsing the above into a one-liner:
elems = ([0, 1, b, b, b+1, 1]+np.arange(b-1)[:, None]+b*np.arange(a-1)[:,None, None]).ravel()
Finishes in <6ms on my computer
In [9]: %timeit ([0, 1, b, b, b+1, 1] + np.arange(b-1)[:, None] + b*np.arange(a-1)[:, None, None]).ravel()
5.23 ms ± 112 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [10]: %timeit create_shader_array(a, b)
29.8 ms ± 176 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
To be clear, below is what I am trying to do. The question is: how can I change the function oper_AB() so that, instead of the nested for loop, it uses NumPy vectorization/broadcasting to get to ret_list much faster?
def oper(a_1D, b_1D):
    return np.dot(a_1D, b_1D) / np.dot(b_1D, b_1D)

def oper_AB(A_2D, B_2D):
    ret_list = []
    for a_1D in A_2D:
        for b_1D in B_2D:
            ret_list.append(oper(a_1D, b_1D))
    return ret_list
Strictly addressing the question (with the reservation that I suspect the OP wants the norm, not the norm squared, as divisor below):
r = a @ b.T / np.linalg.norm(b, axis=1)**2
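If the plain norm really is what's wanted (per the reservation above), the same expression with the square dropped would be:

r = a @ b.T / np.linalg.norm(b, axis=1)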
Example:
np.random.seed(0)
a = np.random.randint(0, 10, size=(2,2))
b = np.random.randint(0, 10, size=(2,2))
Then:
>>> a
array([[5, 0],
[3, 3]])
>>> b
array([[7, 9],
[3, 5]])
>>> oper_AB(a, b)
[0.2692307692307692,
0.4411764705882353,
0.36923076923076925,
0.7058823529411765]
>>> a @ b.T / np.linalg.norm(b, axis=1)**2
array([[0.26923077, 0.44117647],
[0.36923077, 0.70588235]])
>>> np.ravel(a @ b.T / np.linalg.norm(b, axis=1)**2)
array([0.26923077, 0.44117647, 0.36923077, 0.70588235])
Speed:
n, m = 1000, 100
a = np.random.uniform(size=(n, m))
b = np.random.uniform(size=(n, m))
orig = %timeit -o oper_AB(a, b)
# 2.73 s ± 11 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
new = %timeit -o np.ravel(a @ b.T / np.linalg.norm(b, axis=1)**2)
# 2.22 ms ± 33.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
orig.average / new.average
# 1228.78 (speedup)
Our solution is 1200x faster than the original.
Correctness:
>>> np.allclose(np.ravel(a @ b.T / np.linalg.norm(b, axis=1)**2), oper_AB(a, b))
True
Speed on a large array, comparison to @Ahmed AEK's solution:
n, m = 2000, 2000
a = np.random.uniform(size=(n, m))
b = np.random.uniform(size=(n, m))
new = %timeit -o np.ravel(a @ b.T / np.linalg.norm(b, axis=1)**2)
# 86.5 ms ± 484 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
other = %timeit -o AEK(a, b) # Ahmed AEK's answer
# 102 ms ± 379 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Our solution is 15% faster :-)
This should work:
result = (np.matmul(A_2D, B_2D.transpose())/np.sum(B_2D*B_2D,axis=1)).flatten()
But this second implementation will be faster because of better cache utilization:
def oper_AB(A_2D, B_2D):
    b_squared = np.sum(B_2D*B_2D, axis=1).reshape([-1, 1])
    b_normalized = B_2D/b_squared
    del b_squared
    returned_val = np.matmul(A_2D, b_normalized.transpose())
    return returned_val.flatten()
The del is there just in case the memory allocated by B_2D is too big (or it's just me being used to working with multi-GB arrays).
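As a quick sanity check (my addition, not from the original answer), the fast version can be compared against a comprehension replicating the original nested loop:

A = np.random.uniform(size=(4, 3))
B = np.random.uniform(size=(5, 3))
# same i-then-j ordering as the original nested loop
assert np.allclose(oper_AB(A, B), [np.dot(x, y)/np.dot(y, y) for x in A for y in B])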
Edit: as requested, the same pairwise pattern for A_1D - B_1D:
def oper2_AB(A_2D, B_2D):
    output = np.zeros([A_2D.shape[0]*B_2D.shape[0], A_2D.shape[1]], dtype=A_2D.dtype)
    for i in range(len(A_2D)):
        output[i*len(B_2D):(i+1)*len(B_2D)] = A_2D[i] - B_2D
    return output
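For what it's worth (my own note), the same result can be obtained with a single broadcasted subtraction, at the cost of materializing one large intermediate:

# equivalent, but allocates the full (len(A_2D)*len(B_2D), m) result in one go
out = (A_2D[:, None, :] - B_2D[None, :, :]).reshape(-1, A_2D.shape[1])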
I have an N-by-M array, at each entry of which I need to do some NumPy operations and put the result there.
Right now, I'm doing it the naive way with a double loop:
import numpy as np
N = 10
M = 11
K = 100
result = np.zeros((N, M))
is_relevant = np.random.rand(N, M, K) > 0.5
weight = np.random.rand(3, 3, K)
values1 = np.random.rand(3, 3, K)
values2 = np.random.rand(3, 3, K)
for i in range(N):
    for j in range(M):
        selector = is_relevant[i, j, :]
        result[i, j] = np.sum(
            np.multiply(
                np.multiply(
                    values1[..., selector],
                    values2[..., selector]
                ), weight[..., selector]
            )
        )
Since all the in-loop operations are simply NumPy operations, I think there must be a way to do this faster or loop-free.
We can use a combination of np.einsum and np.tensordot -
a = np.einsum('ijk,ijk,ijk->k',values1,values2,weight)
out = np.tensordot(a,is_relevant,axes=(0,2))
Alternatively, with one einsum call -
np.einsum('ijk,ijk,ijk,lmk->lm',values1,values2,weight,is_relevant)
And with np.dot and einsum -
is_relevant.dot(np.einsum('ijk,ijk,ijk->k',values1,values2,weight))
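A quick check (my addition, using the result array filled by the question's loop) that this agrees with the original:

>>> np.allclose(result, is_relevant.dot(np.einsum('ijk,ijk,ijk->k', values1, values2, weight)))
True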
Also, play around with the optimize flag in np.einsum by setting it to True to use BLAS.
Timings -
In [146]: %%timeit
...: a = np.einsum('ijk,ijk,ijk->k',values1,values2,weight)
...: out = np.tensordot(a,is_relevant,axes=(0,2))
10000 loops, best of 3: 121 µs per loop
In [147]: %timeit np.einsum('ijk,ijk,ijk,lmk->lm',values1,values2,weight,is_relevant)
1000 loops, best of 3: 851 µs per loop
In [148]: %timeit np.einsum('ijk,ijk,ijk,lmk->lm',values1,values2,weight,is_relevant,optimize=True)
1000 loops, best of 3: 347 µs per loop
In [156]: %timeit is_relevant.dot(np.einsum('ijk,ijk,ijk->k',values1,values2,weight))
10000 loops, best of 3: 58.6 µs per loop
Very large arrays
For very large arrays, we can leverage numexpr to make use of multiple cores -
import numexpr as ne
a = np.einsum('ijk,ijk,ijk->k',values1,values2,weight)
out = np.empty((N, M))
for i in range(N):
    for j in range(M):
        out[i,j] = ne.evaluate('sum(is_relevant_ij*a)', {'is_relevant_ij': is_relevant[i,j], 'a': a})
Another very simple option is just:
result = (values1 * values2 * weight * is_relevant[:, :, np.newaxis, np.newaxis]).sum((2, 3, 4))
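Note (my addition) that this materializes an (N, M, 3, 3, K) intermediate array, so it trades memory for brevity.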
Divakar's last solution is faster than this though. Timings for comparison:
%timeit np.tensordot(np.einsum('ijk,ijk,ijk->k',values1,values2,weight),is_relevant,axes=(0,2))
# 30.9 µs ± 1.71 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit np.einsum('ijk,ijk,ijk,lmk->lm',values1,values2,weight,is_relevant)
# 379 µs ± 486 ns per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit np.einsum('ijk,ijk,ijk,lmk->lm',values1,values2,weight,is_relevant,optimize=True)
# 145 µs ± 1.89 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit is_relevant.dot(np.einsum('ijk,ijk,ijk->k',values1,values2,weight))
# 15 µs ± 124 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit (values1 * values2 * weight * is_relevant[:, :, np.newaxis, np.newaxis]).sum((2, 3, 4))
# 152 µs ± 1.4 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
I have a square matrix that is NxN (N is usually >500). It is constructed using a numpy array.
I need to extract a new matrix that has the i-th column and row removed from this matrix. The new matrix is (N-1)x(N-1).
I am currently using the following code to extract this matrix:
new_mat = np.delete(old_mat, idx_2_remove, 0)
new_mat = np.delete(new_mat, idx_2_remove, 1)
I have also tried to use:
row_indices = [i for i in range(0,idx_2_remove)]
row_indices += [i for i in range(idx_2_remove+1,N)]
col_indices = row_indices
rows = [i for i in row_indices for j in col_indices]
cols = [j for i in row_indices for j in col_indices]
old_mat[(rows, cols)].reshape(len(row_indices), len(col_indices))
But I found this to be slower than the np.delete() approach above, which is itself still quite slow for my application.
Is there a faster way to accomplish what I want?
Edit 1:
It seems the following is even faster than the above two, but not by much:
new_mat = old_mat[row_indices,:][:,col_indices]
Here are 3 alternatives I quickly wrote:
Repeated delete:
def foo1(arr, i):
    return np.delete(np.delete(arr, i, axis=0), i, axis=1)
Maximal use of slicing (may need some edge checks):
def foo2(arr, i):
    N = arr.shape[0]
    res = np.empty((N-1, N-1), arr.dtype)
    res[:i, :i] = arr[:i, :i]
    res[:i, i:] = arr[:i, i+1:]
    res[i:, :i] = arr[i+1:, :i]
    res[i:, i:] = arr[i+1:, i+1:]
    return res
Advanced indexing:
def foo3(arr, i):
    N = arr.shape[0]
    idx = np.r_[:i, i+1:N]
    return arr[np.ix_(idx, idx)]
Test that they work:
In [874]: x = np.arange(100).reshape(10,10)
In [875]: np.allclose(foo1(x,5),foo2(x,5))
Out[875]: True
In [876]: np.allclose(foo1(x,5),foo3(x,5))
Out[876]: True
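(The arr used in the timings below isn't defined in the transcript; judging by the magnitudes, something like a 1000x1000 array is a plausible assumption:)

arr = np.arange(1000*1000).reshape(1000, 1000)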
Compare timings:
In [881]: timeit foo1(arr,100).shape
4.98 ms ± 190 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [882]: timeit foo2(arr,100).shape
526 µs ± 1.57 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [883]: timeit foo3(arr,100).shape
2.21 ms ± 112 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
So the slicing is fastest, even if the code is longer. It looks like np.delete works like foo3, but one dimension at a time.
This is a rather simple operation, but it is repeated millions of times in my actual code and, if possible, I'd like to improve its performance.
import numpy as np
# Initial data array
xx = np.random.uniform(0., 1., (3, 14, 1))
# Coefficients used to modify 'xx'
a, b, c = np.random.uniform(0., 1., 3)
# Operation on 'xx' to obtain the final array 'yy'
yy = xx[0] * a * b + xx[1] * b + xx[2] * c
The last line is the one I'd like to improve. Basically, each term in xx is multiplied by a factor (given by the a, b, c coefficients) and then all terms are added to give a final yy array with the shape (14, 1) vs the shape of the initial xx array (3, 14, 1).
Is it possible to do this via numpy broadcasting?
We could use broadcasted multiplication and then sum along the first axis for the first alternative.
As the second one, we could also bring in matrix-multiplication with np.dot. Thus, giving us two more approaches. Here's the timings for the sample provided in the question -
# Original one
In [81]: %timeit xx[0] * a * b + xx[1] * b + xx[2] * c
100000 loops, best of 3: 5.04 µs per loop
# Proposed alternative #1
In [82]: %timeit (xx *np.array([a*b,b,c])[:,None,None]).sum(0)
100000 loops, best of 3: 4.44 µs per loop
# Proposed alternative #2
In [83]: %timeit np.array([a*b,b,c]).dot(xx[...,0])[:,None]
1000000 loops, best of 3: 1.51 µs per loop
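A quick verification (my own) that both alternatives reproduce yy:

coeffs = np.array([a*b, b, c])
np.allclose(yy, (xx * coeffs[:, None, None]).sum(0))  # True
np.allclose(yy, coeffs.dot(xx[..., 0])[:, None])      # True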
This is similar to Divakar's answer. Swap the first and the third axes of xx and do a dot product.
import numpy as np
# Initial data array
xx = np.random.uniform(0., 1., (3, 14, 1))
# Coefficients used to modify 'xx'
a, b, c = np.random.uniform(0., 1., 3)
def op():
    yy = xx[0] * a * b + xx[1] * b + xx[2] * c
    return yy

def tai():
    d = np.array([a*b, b, c])
    return np.swapaxes(np.swapaxes(xx, 0, 2).dot(d), 0, 1)

def Divakar():
    # improvement given by Divakar
    return np.array([a*b, b, c]).dot(xx.swapaxes(0, 1))
%timeit op()
7.21 µs ± 222 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit tai()
4.06 µs ± 140 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit Divakar()
3 µs ± 105 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Say I have a matrix A of dimension N by M.
I wish to return an N dimensional vector V where the nth element is the double sum of all pairwise product of the entries in the nth row of A.
In loops, I guess I could do:
V = np.zeros(A.shape[0])
for n in range(A.shape[0]):
    for i in range(A.shape[1]):
        for j in range(A.shape[1]):
            V[n] += A[n,i] * A[n,j]
I want to vectorise this and I guess I could do:
V_temp = np.einsum('ij,ik->ijk', A, A)
V = np.einsum('ijk->i', V_temp)
But I don't think this is a very memory-efficient way, as the intermediate step V_temp unnecessarily stores the whole outer products when all I need are the sums. Is there a better way to do this?
Thanks
You can use
V=np.einsum("ni,nj->n",A,A)
You are actually calculating
A.sum(-1)**2
In other words, the sum over an outer product is just the product of the sums of the factors: sum_i sum_j A[n,i]*A[n,j] = (sum_i A[n,i]) * (sum_j A[n,j]) = (sum_i A[n,i])**2.
Demo:
A = np.random.random((1000,1000))
np.allclose(np.einsum('ij,ik->i', A, A), A.sum(-1)**2)
# True
import timeit
t = timeit.timeit('np.einsum("ij,ik->i",A,A)', globals=dict(A=A,np=np), number=10)*100; f"{t:8.4f} ms"
# '948.4210 ms'
t = timeit.timeit('A.sum(-1)**2', globals=dict(A=A,np=np), number=10)*100; f"{t:8.4f} ms"
# ' 0.7396 ms'
Perhaps you can use
np.einsum('ij,ik->i', A, A)
or the equivalent
np.einsum(A, [0,1], A, [0,2], [0])
On a 2015 Macbook, I get
In [35]: A = np.random.rand(100,100)
In [37]: %timeit for_loops(A)
640 ms ± 24.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [38]: %timeit np.einsum('ij,ik->i', A, A)
658 µs ± 7.25 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [39]: %timeit np.einsum(A, [0,1], A, [0,2], [0])
672 µs ± 19.3 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)