I've been checking out how to vectorize an outer and an inner for loop. They contain some calculations and also a delete inside them - that seems to make it much less straightforward.
How would this be vectorized best?
import numpy as np
flattenedArray = np.ndarray.tolist(someNumpyArray)
#flattenedArray is a python list of lists.
c = flattenedArray[:]
for a in range(len(flattenedArray)):
    for b in range(a+1, len(flattenedArray)):
        if a == b:
            continue
        i0 = flattenedArray[a][0]
        j0 = flattenedArray[a][1]
        z0 = flattenedArray[a][2]
        i1 = flattenedArray[b][0]
        j1 = flattenedArray[b][1]
        z1 = flattenedArray[b][2]
        if np.square(z0 - z1) <= (np.square(i0 - i1) + np.square(j0 - j1)):
            if (np.square(i0 - i1) + np.square(j0 - j1)) <= np.square(z0 + z1):
                c.remove(flattenedArray[b])
@MSeifert is, of course, as so often, right. So the following full vectorisation is only to show "how it's done":
import numpy as np

N = 4
data = np.random.random((N, 3))

# vectorised code
j, i = np.tril_indices(N, -1)  # chose tril over triu to have contiguous columns,
                               # which is useful later
sqsum = np.square(data[i, 0] - data[j, 0]) + np.square(data[i, 1] - data[j, 1])
cond = np.square(data[i, 2] + data[j, 2]) >= sqsum
cond &= np.square(data[i, 2] - data[j, 2]) <= sqsum
# because equal 'b's are grouped together we can use reduceat:
cond = np.r_[False, np.logical_or.reduceat(
    cond, np.add.accumulate(np.arange(N-1)))]
left = data[~cond, :]
# original code (modified to make it run)
flattenedArray = np.ndarray.tolist(data)
# flattenedArray is a python list of lists.
c = flattenedArray[:]
for a in range(len(flattenedArray)):
    for b in range(a+1, len(flattenedArray)):
        if a == b:
            continue
        i0 = flattenedArray[a][0]
        j0 = flattenedArray[a][1]
        z0 = flattenedArray[a][2]
        i1 = flattenedArray[b][0]
        j1 = flattenedArray[b][1]
        z1 = flattenedArray[b][2]
        if np.square(z0 - z1) <= (np.square(i0 - i1) + np.square(j0 - j1)):
            if (np.square(i0 - i1) + np.square(j0 - j1)) <= np.square(z0 + z1):
                try:
                    c.remove(flattenedArray[b])
                except ValueError:
                    pass

# check they are the same
print(np.all(c == left))
Vectorizing the inner loop isn't much of a problem if you work with a mask:
import numpy as np

# I'm using a random array
flattenedArray = np.random.randint(0, 100, (10, 3))

mask = np.zeros(flattenedArray.shape[0], bool)
for idx, row in enumerate(flattenedArray):
    # Calculate the broadcasted elementwise addition/subtraction of this row
    # with all following rows
    added_squared = np.square(row[None, :] + flattenedArray[idx+1:])
    subtracted_squared = np.square(row[None, :] - flattenedArray[idx+1:])
    # Check the conditions
    col1_col2_added = subtracted_squared[:, 0] + subtracted_squared[:, 1]
    cond1 = subtracted_squared[:, 2] <= col1_col2_added
    cond2 = col1_col2_added <= added_squared[:, 2]
    # Update the mask
    mask[idx+1:] |= cond1 & cond2
# Apply the mask to keep the rows that were not flagged for removal
flattenedArray[~mask]
If you also want to vectorize the outer loop, you have to do it by broadcasting; that, however, will use O(n**2) memory instead of O(n). Given that the critical inner loop is already vectorized, there won't be much speedup from vectorizing the outer loop as well.
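For completeness, here's a minimal sketch of what that full broadcasting could look like (my addition, not part of the original answer; it assumes the (n, 3) flattenedArray from above and trades O(n**2) memory for dropping the Python loop):

import numpy as np

flattenedArray = np.random.randint(0, 100, (10, 3))

# all pairwise sums/differences at once: shape (n, n, 3), hence O(n**2) memory
diff_sq = np.square(flattenedArray[:, None, :] - flattenedArray[None, :, :])
add_sq = np.square(flattenedArray[:, None, :] + flattenedArray[None, :, :])

col1_col2 = diff_sq[..., 0] + diff_sq[..., 1]
cond = (diff_sq[..., 2] <= col1_col2) & (col1_col2 <= add_sq[..., 2])

# only pairs with b > a count, i.e. the strict upper triangle
cond = np.triu(cond, k=1)
# row b is removed if the condition holds for any a < b
mask = cond.any(axis=0)
flattenedArray[~mask]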
I have the following for loop that operates over three numpy arrays of the same length:
import numpy as np

n = 100
a = np.random.random(n)
b = np.random.random(n)
c = np.random.random(n)

valid = np.empty(n)
for i in range(n):
    valid[i] = np.any(a[i] > b[i:] + c[i:].cumsum())
Is there a way to replace this for loop with some vectorized numpy operations?
For example, because I only care whether a[i] is larger than any value in b[i:], I can do np.minimum.accumulate(b[::-1])[::-1], which gets the smallest value of b from each index onwards, and then compare it to a like this:
a > np.minimum.accumulate(b[::-1])[::-1]
but I still would need a way to vectorize the c[i:].cumsum() into a single array calculation.
Your goal is to find the minimum of b[i:] + c[i:].cumsum() for each i. Clearly you can compare that to a directly.
You can write the elements of c[i:].cumsum() as the upper triangle of a matrix. Let's look at a toy case with n = 3:
c = [c1, c2, c3]
s1 = c.cumsum()
s0 = np.r_[0, s1[:-1]]
You can write the elements of the cumulative sum as
c1, c1 + c2, c1 + c2 + c3     s1[0:]                 s1[0:] - s0[0]
    c2,      c2 + c3        = s1[1:] - c1          = s1[1:] - s0[1]
             c3               s1[2:] - (c1 + c2)     s1[2:] - s0[2]
You can use np.triu_indices to construct these sums as a raveled array:
# note: this rebinds the name c to the column indices, so s1 and s0
# must already have been computed from the original array c
r, c = np.triu_indices(n)
diff = s1[c] - s0[r] + b[c]
Since np.minimum is a ufunc, you can accumulate diff for each run defined by r using minimum.reduceat. The locations are given roughly by np.flatnonzero(np.diff(r)) + 1, but you can generate them faster with np.arange:
m = np.minimum.reduceat(diff, np.r_[0, np.arange(n, 1, -1).cumsum()])
So finally, you have:
valid = a > m
TL;DR
s1 = c.cumsum()
s0 = np.r_[0, s1[:-1]]
r, c = np.triu_indices(n)
valid = a > np.minimum.reduceat(s1[c] - s0[r] + b[c], np.r_[0, np.arange(n, 1, -1).cumsum()])
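As a quick sanity check (my addition), the vectorized result can be compared against the original loop; I use cols instead of c for the triu column indices so the original c array isn't shadowed:

import numpy as np

n = 100
a = np.random.random(n)
b = np.random.random(n)
c = np.random.random(n)

# original loop
expected = np.empty(n, dtype=bool)
for i in range(n):
    expected[i] = np.any(a[i] > b[i:] + c[i:].cumsum())

# vectorized version from above
s1 = c.cumsum()
s0 = np.r_[0, s1[:-1]]
r, cols = np.triu_indices(n)
valid = a > np.minimum.reduceat(s1[cols] - s0[r] + b[cols],
                                np.r_[0, np.arange(n, 1, -1).cumsum()])

print(np.array_equal(valid, expected))  # should print True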
I assume you want to vectorize it to decrease the running time. Since you are only using pure NumPy operations, you can use numba (see the 5 Minutes Guide to Numba). It will look something like this:
import numba
import numpy as np

@numba.njit
def valid_for_single_idx(idx, a, b, c):
    return np.any(a[idx] > b[idx:] + c[idx:].cumsum())

valid = np.empty(n)
for i in range(n):
    valid[i] = valid_for_single_idx(i, a, b, c)
So far this isn't really vectorization (the loop still happens), but it translates the NumPy line into LLVM-compiled machine code, so each iteration runs about as fast as possible.
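Going one step further (my addition, not part of the original answer), you could also move the outer loop into the compiled function, so the Python-level loop overhead disappears as well; this is a sketch assuming a, b and c are 1-D float arrays:

import numba
import numpy as np

@numba.njit
def valid_all(a, b, c):
    # same computation as before, but the whole loop is compiled
    n = a.shape[0]
    out = np.empty(n, dtype=np.bool_)
    for i in range(n):
        out[i] = np.any(a[i] > b[i:] + c[i:].cumsum())
    return out

valid = valid_all(a, b, c)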
Although it doesn't increase the speed, you can use map, which looks a bit nicer:
import numba
import numpy as np
from functools import partial

@numba.njit
def valid_for_single_idx(idx, a, b, c):
    return np.any(a[idx] > b[idx:] + c[idx:].cumsum())

valid = map(partial(valid_for_single_idx, a=a, b=b, c=c), range(n))
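One caveat (my addition): in Python 3, map returns a lazy iterator, so nothing is actually computed until it is consumed; to materialize the result as a boolean array you could write:

valid = np.fromiter(map(partial(valid_for_single_idx, a=a, b=b, c=c), range(n)),
                    dtype=bool, count=n)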
I have a performance issue when coding with Python.
Let's say I have a very large (N x 2) array of strings, with N = 10,000,000 as in the code below, and two string variables label_a and label_b. Here is the code:
import numpy as np
import time

indices = np.array([np.random.choice(np.arange(5000).astype(str), size=10000000),
                    np.random.choice(np.arange(5000).astype(str), size=10000000)]).T
costs = np.random.uniform(size=10000000)

label_a = '2'
label_b = '9'

t0 = time.time()
costs = costs[(indices[:,0]!=label_a)*(indices[:,0]!=label_b)*(indices[:,1]!=label_a)*(indices[:,1]!=label_b)]
indices = indices[(indices[:,0]!=label_a)*(indices[:,0]!=label_b)*(indices[:,1]!=label_a)*(indices[:,1]!=label_b)]
t1 = time.time()

toseq = t1 - t0
print(toseq)
The above code segment takes 3 seconds every time it's run. I would like to achieve the same thing while reducing the computing cost.
I am using a boolean mask to retrieve only the rows in the costs and indices arrays whose values are neither label_a nor label_b.
As indicated in the comments, computing the comparison masks you're after only once, and combining them only once, would save time.
(I've also changed the way of timing, just for brevity - the results are the same)
import numpy as np
from timeit import timeit

r = 5000
n = 10000000
indices = np.array([
    np.random.choice(np.arange(r).astype(str), size=n),
    np.random.choice(np.arange(r).astype(str), size=n)
]).T
costs = np.random.uniform(size=n)

label_a = '2'
label_b = '9'

n_indices = np.array([
    np.random.choice(np.arange(r), size=n),
    np.random.choice(np.arange(r), size=n)
]).T

def run():
    global indices
    global costs
    _ = costs[(indices[:, 0] != label_a)*(indices[:, 0] != label_b) *
              (indices[:, 1] != label_a)*(indices[:, 1] != label_b)]
    _ = indices[(indices[:, 0] != label_a)*(indices[:, 0] != label_b) *
                (indices[:, 1] != label_a)*(indices[:, 1] != label_b)]

def run_faster():
    global indices
    global costs
    # compute the comparisons only once
    not_a0 = indices[:, 0] != label_a
    not_b0 = indices[:, 0] != label_b
    not_a1 = indices[:, 1] != label_a
    not_b1 = indices[:, 1] != label_b
    _ = costs[not_a0 * not_b0 * not_a1 * not_b1]
    _ = indices[not_a0 * not_b0 * not_a1 * not_b1]

def run_even_faster():
    global indices
    global costs
    # also combine them only once
    cond = ((indices[:, 0] != label_a) * (indices[:, 0] != label_b) *
            (indices[:, 1] != label_a) * (indices[:, 1] != label_b))
    _ = costs[cond]
    _ = indices[cond]

def run_sep_mask():
    global indices
    global costs
    global cond
    # just the masking part of run_even_faster
    cond = ((indices[:, 0] != label_a) * (indices[:, 0] != label_b) *
            (indices[:, 1] != label_a) * (indices[:, 1] != label_b))

def run_sep_index():
    global indices
    global costs
    global cond
    # just the indexing part of run_even_faster
    _ = costs[cond]
    _ = indices[cond]

def run_even_faster_numerical():
    global indices
    global costs
    # use int values and n_indices instead of indices
    a = int(label_a)
    b = int(label_b)
    cond = ((n_indices[:, 0] != a) * (n_indices[:, 0] != b) *
            (n_indices[:, 1] != a) * (n_indices[:, 1] != b))
    _ = costs[cond]
    _ = indices[cond]

def run_all(funcs):
    for f in funcs:
        print('{:.4f} : {}()'.format(timeit(f, number=1), f.__name__))

run_all([run, run_faster, run_even_faster, run_sep_mask,
         run_sep_index, run_even_faster_numerical])
Note that I also added an example where the operation is based not on strings but on numbers. If you can avoid the values being strings and use numbers instead, you'll get a performance boost as well.
This boost gets substantial if you start comparing longer labels - in the end it might even be worth converting the strings to numbers before the filtering, if the strings get long enough.
These are my results:
0.9711 : run()
0.7065 : run_faster()
0.6983 : run_even_faster()
0.2657 : run_sep_mask()
0.4174 : run_sep_index()
0.4536 : run_even_faster_numerical()
The two sep entries show that, in run_even_faster, the indexing takes roughly one and a half times as long as building the mask, so you can only expect so much improvement from tuning the mask further.
However, they also show that building the mask from integers adds less than 0.04 seconds on top of the actual indexing, compared to about 0.26 seconds for building the mask from strings. So that's the room you have for improvement.
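Based on that, a minimal sketch of converting up front (my addition; it assumes every label is a decimal string, as in the example, so a plain astype works, and it uses &, the idiomatic boolean operator, which is equivalent to * on boolean masks):

import numpy as np

# pay the conversion cost once, then filter numerically
num_indices = indices.astype(np.int64)
a, b = int(label_a), int(label_b)

cond = ((num_indices[:, 0] != a) & (num_indices[:, 0] != b) &
        (num_indices[:, 1] != a) & (num_indices[:, 1] != b))
costs = costs[cond]
indices = indices[cond]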
Given a large array a which is sorted along each row, is there a faster alternative to numpy's np.where for finding the indices where min_v <= a <= max_v? I would imagine that leveraging the sorted nature of the array should speed things up.
Here's an example of a setup using np.where to find the given indices in a large array.
import numpy as np
# Initialise an example of an array in which to search
r, c = int(1e2), int(1e6)
a = np.arange(r*c).reshape(r, c)
# Set up search limits
min_v = (r*c/2)-10
max_v = (r*c/2)+10
# Find indices of occurrences
idx = np.where(((a >= min_v) & (a <= max_v)))
You can use np.searchsorted:
import numpy as np
r, c = 10, 100
a = np.arange(r*c).reshape(r, c)
min_v = ((r * c) // 2) - 10
max_v = ((r * c) // 2) + 10
# Old method
idx = np.where(((a >= min_v) & (a <= max_v)))
# With searchsorted
i1 = np.searchsorted(a.ravel(), min_v, 'left')
i2 = np.searchsorted(a.ravel(), max_v, 'right')
idx2 = np.unravel_index(np.arange(i1, i2), a.shape)
print((idx[0] == idx2[0]).all() and (idx[1] == idx2[1]).all())
# True
When I use np.searchsorted with the 100 million numbers from the original example on the (not up-to-date) NumPy version 1.12.1 (I can't tell about newer versions), it is not much faster than np.where:
>>> import timeit
>>> timeit.timeit('np.where(((a >= min_v) & (a <= max_v)))', number=10, globals=globals())
6.685825735330582
>>> timeit.timeit('np.searchsorted(a.ravel(), [min_v, max_v])', number=10, globals=globals())
5.304438766092062
But, although the NumPy docs for searchsorted say "This function uses the same algorithm as the builtin python bisect.bisect_left and bisect.bisect_right functions", the latter are a lot faster:
>>> import bisect
>>> timeit.timeit('bisect.bisect_left(a.base, min_v), bisect.bisect_right(a.base, max_v)', number=10, globals=globals())
0.002058468759059906
Therefore, I'd use this:
idx = np.unravel_index(range(bisect.bisect_left(a.base, min_v),
                             bisect.bisect_right(a.base, max_v)), a.shape)
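One caveat (my addition): this relies on a.base being the flat, globally sorted buffer that a is a view of, which holds for the arange/reshape setup above. If the data were only sorted within each row rather than globally, a sketch of a per-row variant could look like this:

import numpy as np

rows, cols = [], []
for i, row in enumerate(a):
    # each row is sorted, so binary-search it on its own
    lo = np.searchsorted(row, min_v, side='left')
    hi = np.searchsorted(row, max_v, side='right')
    rows.append(np.full(hi - lo, i))
    cols.append(np.arange(lo, hi))
idx = (np.concatenate(rows), np.concatenate(cols))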
I have two numpy arrays like below
a=np.array([11,12])
b=np.array([9])
#a-b should be [2,12]
I want to subtract b from a such that the result is [2, 12]. How can I achieve this?
You can zero-pad the shorter array:
import numpy as np
n = max(len(a), len(b))
a_pad = np.pad(a, (0, n - len(a)), 'constant')
b_pad = np.pad(b, (0, n - len(b)), 'constant')
ans = a_pad - b_pad
Here np.pad's second argument is (number of pads on the left, number of pads on the right).
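For the arrays from the question, a quick check:
>>> a_pad
array([11, 12])
>>> b_pad
array([9, 0])
>>> ans
array([ 2, 12])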
A similar method to @BlownhitherMa's would be to create an array of zeros the size of a (we can call it c), then put in b's values where appropriate:
c = np.zeros_like(a)
c[np.indices(b.shape)] = b
>>> c
array([9, 0])
>>> a-c
array([ 2, 12])
You could use zip_longest from itertools:
import numpy as np
from itertools import zip_longest
a = np.array([11, 12])
b = np.array([9])
result = np.array([ai - bi for ai, bi in zip_longest(a, b, fillvalue=0)])
print(result)
Output
[ 2 12]
Here is a very long, fully spelled-out solution:
import numpy as np

diff = []
n = min(len(a), len(b))
for i in range(n):
    diff.append(a[i] - b[i])
if len(a) > n:
    for i in range(n, len(a)):
        diff.append(a[i])
elif len(b) > n:
    for i in range(n, len(b)):
        diff.append(-b[i])  # the remaining b values get subtracted from zero
diff = np.array(diff)
print(diff)
We can avoid unnecessary padding / temporaries by copying a and then subtracting b in-place:
# let numpy determine the appropriate result dtype
dtp = (a[:0] - b[:0]).dtype
# copy a (this assumes a is the longer of the two arrays)
d = a.astype(dtp)
# subtract b in-place over the overlapping prefix
d[:b.size] -= b
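For the arrays from the question, this also gives:
>>> d
array([ 2, 12])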
I have a numpy ndarray that I made using numpy.loadtxt. I want to pull an entire row from it based on a condition on the third column. Something like: if array[2][i] meets my condition, then get array[0][i] and array[1][i] as well. I'm new to Python and all of the numpy features, so I'm looking for the best way to do this. Ideally, I'd like to pull two rows at a time, but I won't always have an even number of rows, so I imagine that is a problem.
import numpy as np

'''
Created on Jan 27, 2013

@author:
'''

class Volume:
    f = '/Users/Documents/workspace/findMinMax/crapc.txt'
    m = np.loadtxt(f, unpack=True, usecols=(1, 2, 3), ndmin=2)
    maxZ = max(m[2])
    minZ = min(m[2])
    print("Maximum Z value: " + str(maxZ))
    print("Minimum Z value: " + str(minZ))

    zIncrement = .5
    steps = maxZ / zIncrement
    currentStep = .5

    b = []
    for i in m[2]:  # here is my problem
        while currentStep < steps:
            if m[2][i] < currentStep and m[2][i] > currentStep - zIncrement:
                b.append(m[2][i])
            if len(b) < 2:
                currentStep += zIncrement
    print(b)
Here is some Java code I wrote that shows the general idea of what I want:
while (e < a.length - 1) {
    for (int i = 0; i < a.length - 1; i++) {
        if (a[i][2] < stepSize && a[i][2] > stepSize - 2) {
            x.add(a[i][0]);
            y.add(a[i][1]);
            z.add(a[i][2]);
        }
        if (x.size() < 1) {
            stepSize += 1;
        }
    }
}
First of all, you probably don't want to put your code in that class definition...
import numpy as np

def main():
    m = np.random.random((3, 4))
    mask = (m[2] > 0.5) & (m[2] < 0.8)  # put your conditions here;
                                        # instead of 0.5 and 0.8 you can use
                                        # an array if you like
    m[:, mask]

if __name__ == '__main__':
    main()
mask is a boolean array, m[:, mask] is the array you want
m[2] is the third row of m. If you type m[2] + 2 you get a new array with the old values + 2. m[2] > 0.5 creates an array with boolean values. It is best to try this stuff out with ipython (www.ipython.org)
In the expression m[:, mask] the : means "take all rows", mask describes which columns should be included.
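A tiny demonstration of that indexing (my addition, with made-up values):
>>> m = np.array([[1.0, 2.0, 3.0],
...               [4.0, 5.0, 6.0],
...               [0.4, 0.6, 0.9]])
>>> mask = (m[2] > 0.5) & (m[2] < 0.8)
>>> mask
array([False,  True, False])
>>> m[:, mask]
array([[2. ],
       [5. ],
       [0.6]])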
Update
Next try :-)
for i in range(0, len(m), 2):
    two_rows = m[i:i+2]
If you can write your condition as a simple function
def condition(value):
    # return True or False depending on value
then you could select your subarrays like this:
cond = condition(a[2])
subarray0 = a[0, cond]
subarray1 = a[1, cond]
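For instance (my addition), a hypothetical condition that keeps the columns whose third-row value falls in an interval:

import numpy as np

def condition(value):
    # boolean array: True where the value lies between 0.5 and 0.8
    return (value > 0.5) & (value < 0.8)

a = np.random.random((3, 10))
cond = condition(a[2])
subarray0 = a[0, cond]
subarray1 = a[1, cond]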