Faster method to determine subset of rows of matrix containing particular entry - python

I am looking for a time-efficient solution to the below problem that exploits the fact that I want to perform a certain operation many times over. I have two methods implemented below, and I observe that one of them is significantly faster. I am wondering if there is a more efficient alternative to both methods.
Input: A matrix mat of dimension m*n populated with nonnegative integers (0 <= each integer <= b). Also given are p nonnegative integers q1, q2, ..., qp (each <= b) and vectors v1, v2, ..., vp, where each vector vj contains d row indices of mat.
I am interested in cases where m, p, and d are large (~10^6), n is small (~10), and b is small (~100).
Output: For each pair (vj,qj), return the sub-list of rows of mat among vj[0], vj[1], ..., vj[d-1] that contain the integer qj.
My approach: Because p can be large, I preprocessed mat to record, for each row, which of the integers between 0 and b it contains. Then, I went through the vectors vj to determine whether the rows of mat indexed by their entries contained qj. I tried two different approaches to storing whether each row of mat contains a given integer between 0 and b. To my surprise, I found that Method 1 performs significantly faster than Method 2.
Question: I am wondering if there is a better (practical) way to preprocess mat so that the operations for each pair (vj,qj) are as fast as possible.
Edit: Defining a tmp variable as tmp = isPresent[qs[j]] and iterating through the elements of tmp yielded a faster solution, but I'm hoping I can do something even faster.
Note: Ordering of elements in result is not important.
# Python code
import random
import numpy
import time
m = 1000000 # number of rows of mat
n = 10 # number of columns of mat
b = 255 # upper bound on entries of mat
d = 10000 # dimension of vec (containing row indices of mat)
p = 100 # number of vecs
# random specification of mat
# mat, vec, and q will be inputs from another part of the project
mat = []
for i in range(m):
    tmp = (numpy.random.permutation(b+1))[0:n]
    mat.append(tmp)
# random specification of vec and q
vecs = []
qs = []
for i in range(p):
    qs.append(random.randrange(0, b+1, 1))
    vecs.append((numpy.random.permutation(m))[0:d])
# METHOD 1
# store the rows where each integer occurs
# not too worried about time taken by this step
isPresent = [[False]*m for i in range(b+1)]
for i in range(m):
    for j in mat[i]:
        isPresent[j][i] = True
# mainly care about reducing time from hereon
time1 = 0.0
for j in range(p):
    st1 = time.time()
    result1 = []
    for i in vecs[j]:
        if isPresent[qs[j]][i]:
            result1.append(i)
    time1 += time.time() - st1
# METHOD 2
# store the rows where each integer occurs
# not too worried about time taken by this step
isPresent = [[False]*(b+1) for i in range(m)]
for i in range(m):
    for j in mat[i]:
        isPresent[i][j] = True
# mainly care about reducing time from hereon
time2 = 0.0
for j in range(p):
    st2 = time.time()
    result2 = []
    for i in vecs[j]:
        if isPresent[i][qs[j]]:
            result2.append(i)
    time2 += time.time() - st2
print('time1: ',time1,' time2: ',time2)
Note: I observe time1 = 0.46 seconds and time2 = 0.69 seconds on my laptop

TL;DR: Yes, there is a much better way to compute this using numpy. However, please note that there is a 2D random memory indirection pattern, which is generally slow and known to be difficult to optimize.
Useful information:
Random memory accesses are slow. Indeed, it is difficult for the processor to predict which memory addresses to fetch and thus to hide the memory latency. This is not too bad as long as the data fits in the caches and is reused several times. Random memory accesses done over a huge memory area are much slower and should be avoided like the plague (when possible).
Analysis:
Both methods perform a random memory indirection when executing the expressions isPresent[qs[j]][i] and isPresent[i][qs[j]].
Such indirections are slow. However, method 2 is slower since the average distance between fetched addresses tends to be much bigger than in method 1, causing an effect called cache thrashing.
Faster solution: Numpy can be used to strongly increase the performance of the first method (thanks to "vectorized" native routines).
Indeed, the original method uses plain Python loops, which are generally very slow, and recomputes isPresent[qs[j]] several times.
Here is the faster implementation:
# Assume vecs is a list of np.array rather than a list of lists
isPresent = [numpy.array([False]*m) for i in range(b+1)]
for i in range(m):
    for j in mat[i]:
        isPresent[j][i] = True
time3 = 0.0
for j in range(p):
    st3 = time.time()
    tmp = isPresent[qs[j]]
    result3 = numpy.extract(tmp[vecs[j]], vecs[j])
    time3 += time.time() - st3
Performance results:
time1: 0.165357
time2: 0.309095
time3: 0.007201
The new version is 23 times faster than the first method and 43 times faster than the second.
Note that one can do this significantly faster by computing the j-loop in parallel, but this is a bit more complex.
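For example, one could process the queries with a pool of workers. Here is a rough sketch using a thread pool (an assumption on my side, not part of the answer above; whether it helps depends on how much of the NumPy indexing work releases the GIL, so a process pool or Numba's prange may be needed instead):
from concurrent.futures import ThreadPoolExecutor

def query(j):
    # Same per-query work as method 3 above: mask the requested rows and extract them
    tmp = isPresent[qs[j]]
    return numpy.extract(tmp[vecs[j]], vecs[j])

with ThreadPoolExecutor() as pool:
    results = list(pool.map(query, range(p)))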

Related

The most efficient way rather than using np.setdiff1d and np.in1d, to remove common values of 1D arrays with unique values

I need much faster code to remove values of a 1D array (array length ~10-15) that are common with another 1D array (array length ~1e5-5e5, rarely up to 7e5); both are index arrays containing integers. There are no duplicates in the arrays, they are not sorted, and the order of the values must be kept in the main array after modification. I know that this can be achieved using np.setdiff1d or np.in1d (neither of which is supported by numba in no-python mode), and other similar posts (e.g. this one) do not offer a much more efficient way to do so, but performance is important here because all the values in the main index array will gradually be removed in loops.
import numpy as np
import numba as nb
n = 500000
r = 10
arr1 = np.random.permutation(n)
arr2 = np.random.randint(0, n, r)
# @nb.jit
def setdif1d_np(a, b):
    return np.setdiff1d(a, b, assume_unique=True)

# @nb.jit
def setdif1d_in1d_np(a, b):
    return a[~np.in1d(a, b)]
There is another related post, proposed by norok2 for 2D arrays, that gives a ~15 times faster solution (a hashing-like way using numba) than the usual methods described there. This solution may be the best if it could be adapted for 1D arrays:
@nb.njit
def mul_xor_hash(arr, init=65537, k=37):
    result = init
    for x in arr.view(np.uint64):
        result = (result * k) ^ x
    return result

@nb.njit
def setdiff2d_nb(arr1, arr2):
    # : build `delta` set using hashes
    delta = {mul_xor_hash(arr2[0])}
    for i in range(1, arr2.shape[0]):
        delta.add(mul_xor_hash(arr2[i]))
    # : compute the size of the result
    n = 0
    for i in range(arr1.shape[0]):
        if mul_xor_hash(arr1[i]) not in delta:
            n += 1
    # : build the result
    result = np.empty((n, arr1.shape[-1]), dtype=arr1.dtype)
    j = 0
    for i in range(arr1.shape[0]):
        if mul_xor_hash(arr1[i]) not in delta:
            result[j] = arr1[i]
            j += 1
    return result
I tried to prepare that for 1D arrays, but I have some problems/questions with that.
At first, I don't understand what mul_xor_hash exactly does, and whether init and k are arbitrarily selected or not.
Why mul_xor_hash will not work without nb.njit:
File "C:/Users/Ali/Desktop/test - Copy - Copy.py", line 21, in mul_xor_hash
result = (result * k) ^ x
TypeError: ufunc 'bitwise_xor' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
I don't know how to implement mul_xor_hash on 1D arrays (if it can be done), which I guess may make it even faster than for 2D ones, so I broadcast the input arrays to 2D by [None, :], which gets the following error just for arr2:
print(mul_xor_hash(arr2[0]))
ValueError: new type not compatible with array
and what does delta do
I am searching for the most efficient way in this regard. In the absence of a better method than norok2's solution, how can I prepare this solution for 1D arrays?
Understanding the hash-based solution
At first, I don't understand what mul_xor_hash exactly does, and whether init and k are arbitrarily selected or not
mul_xor_hash is a custom hash function. Functions mixing xor and multiply (possibly with shifts) are known to be relatively fast for computing the hash of a raw data buffer. The multiplication tends to shuffle bits and the xor is used to somehow combine/accumulate the result in a fixed-size small value (i.e. the final hash). There are many different hashing functions. Some are faster than others, and some cause more collisions than others in a given context. A fast hashing function causing too many collisions can be useless in practice, as it would result in a pathological situation where all conflicting values need to be compared. This is why fast hash functions are hard to implement.
init and k are parameters chosen so that the hash is reasonably well balanced. This is pretty common in such a hash function. k needs to be sufficiently big for the multiplication to shuffle bits, and it should typically also be a prime number (values like powers of two tend to increase collisions due to modular arithmetic behaviour). init plays a significant role only for very small arrays (e.g. with 1 item): it helps to reduce collisions by xoring the final hash with a non-trivial constant. Indeed, if arr.size = 1, then result = (init * k) ^ arr[0], where init * k is a constant. Having an identity hash function equal to arr[0] is known to be bad since it tends to result in many collisions (this is a complex topic, but put shortly, arr[0] may be reduced modulo the number of buckets in the hash table, for example). Thus, init should be a relatively big number and init * k should also be a big non-trivial value (a prime number is a good target value).
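As a small illustration of the single-item case described above, with the init and k values used in the code:
import numpy as np

init, k = 65537, 37
x = np.uint64(123456)
# For a one-element array, the hash collapses to a constant xor'ed with the value:
h = np.uint64(init * k) ^ x   # init * k == 2424869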
Why mul_xor_hash will not work without nb.njit
It depends on the input. The input needs to be a 1D array and have a raw size in bytes divisible by 8 (e.g. 64-bit items, 2n 32-bit ones, 4n 16-bit ones, or 8n 8-bit ones). Here are some examples:
mul_xor_hash(np.random.rand(10))
mul_xor_hash(np.arange(10)) # Does not work with np.arange(9)
and what does delta do
It is a set containing the hashes of the arr2 rows, so that matching lines can be found faster than by comparing them without hashes.
how to prepare this solution for 1D arrays?
AFAIK, hashes are only used to avoid comparing whole rows, and this is needed because the input is a 2D array. In 1D, there is no such problem.
There is a big catch with this method: it only works if there are no hash collisions. Otherwise, the implementation wrongly assumes that values are equal even if they are not! @norok2 explicitly mentioned it in the comments though:
Note that the collision handling for the hashings should also be implemented
Faster implementation
Using the 2D solution of @norok2 for 1D arrays is not a good idea, since hashes will not make it faster the way they are used. In fact, a set already uses a hash function internally anyway. Not to mention that collisions need to be properly handled (which a set already does).
Using a set is a relatively good idea since it causes the complexity to be O(n + m) where n = len(arr1) and m = len(arr2). That being said, if arr1 is converted to a set, then it will be too big to fit in the L1 cache (due to the size of arr1 in your case), resulting in slow cache misses. Additionally, the growing size of the set will cause values to be re-hashed, which is not efficient. If arr2 is converted to a set, then the many hash table fetches will not be very efficient since arr2 is very small in your case. This is why this solution is sub-optimal.
One solution is to split arr1 into chunks and then build a set based on the target chunk. You can then check whether a value is in the set or not efficiently. Building the set is still not very efficient due to the growing size. This problem is due to Python itself, which does not provide a way to reserve space for the data structure like other languages do (e.g. C++). One solution to avoid this issue is simply to reimplement a hash table, which is not trivial and cumbersome. Actually, Bloom filters can be used to speed up this process since they can quickly find whether there is no collision between the two sets arr1 and arr2 on average (though they are not trivial to implement).
Another optimization is to use multiple threads to compute the chunks in parallel, since they are independent. That being said, appending to the final array is not easy to do efficiently in parallel, especially since you do not want the order to be modified. One solution is to move the copy out of the parallel loop and do it serially, but this is slow and AFAIK there is no simple way to do that in Numba currently (since the parallelism layer is very limited). Consider using native languages like C/C++ for an efficient parallel implementation.
In the end, hashing can be pretty complex and the speed-up can be quite small compared to a naive implementation with two nested loops, since arr2 only has a few items and modern processors can compare values quickly using SIMD instructions (while hash-based methods can hardly benefit from them on mainstream processors). Unrolling can help to write a pretty simple and fast implementation. Again, unfortunately, Numba uses LLVM-JIT internally, which appears to fail to vectorize such simple code (certainly due to missing optimizations in either LLVM-JIT or even LLVM itself). As a result, the non-vectorized code is finally a bit slower (rather than 4~10 times faster on a modern mainstream processor). One solution is to use C/C++ code instead to do that (or possibly Cython).
Here is a serial implementation using basic Bloom filters:
@nb.njit('uint32(int32)')
def hash_32bit_4k(value):
    return (np.uint32(value) * np.uint32(27_644_437)) & np.uint32(0x0FFF)

@nb.njit(['int32[:](int32[:], int32[:])', 'int32[:](int32[::1], int32[::1])'])
def setdiff1d_nb_faster(arr1, arr2):
    out = np.empty_like(arr1)
    bloomFilter = np.zeros(4096, dtype=np.uint8)
    for j in range(arr2.size):
        bloomFilter[hash_32bit_4k(arr2[j])] = True
    cur = 0
    for i in range(arr1.size):
        # If the bloom-filter value is false, we know arr1[i] is not in arr2.
        # Otherwise, there may be a false positive (conflict) and we need to check arr2 to be sure.
        if bloomFilter[hash_32bit_4k(arr1[i])] and arr1[i] in arr2:
            continue
        out[cur] = arr1[i]
        cur += 1
    return out[:cur]
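Note that the explicit int32 signatures mean that arrays built as in the question (np.random.permutation and np.random.randint typically return int64 on 64-bit Linux) need a cast first, for example (or use the 64-bit variant shown next):
result = setdiff1d_nb_faster(arr1.astype(np.int32), arr2.astype(np.int32))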
Here is an untested variant that should work for 64-bit integers (floating point numbers need memory views and possibly a prime constant too):
@nb.njit('uint64(int64)')
def hash_32bit_4k(value):
    return (np.uint64(value) * np.uint64(67_280_421_310_721)) & np.uint64(0x0FFF)
Note that if all the values in the small array are contained in the main array in each loop, then we can speed up the arr1[i] in arr2 part by removing values from arr2 when we find them. That being said, collisions and findings should be very rare so I do not expect this to be significantly faster (not to mention it adds some overhead and complexity). If items are computed in chunks, then the last chunks can be directly copied without any check but the benefit should still be relatively small. Note that this strategy can be effective for the naive (C/C++) SIMD implementation previously mentioned though (it can be about 2x faster).
Generalization and parallel implementation
This section focuses on the algorithm to use regarding the input size. It particularly details a SIMD-based implementation and discusses the use of multiple threads.
First of all, regarding the value r, the best algorithm to use can be different. More specifically:
when r is 0, the best thing to do is to return the input array arr1 unmodified (possibly a copy to avoid issues with in-place algorithms);
when r is 1, we can use one basic loop iterating over the array, but the best implementation is likely to use np.where of Numpy, which is highly optimized for that (see the dispatch sketch after this list);
when r is small, like <10, then using a SIMD-based implementation should be particularly efficient, especially if the iteration range of the arr2-based loop is known at compile-time and is unrolled;
for bigger r values that are still relatively small (e.g. r < 1000 and r << n), the provided hash-based solution should be one of the best;
for larger r values with r << n, the hash-based solution can be optimized by packing boolean values as bits in bloomFilter and by using multiple hash functions instead of one so as to better handle collisions while being more cache-friendly (in fact, this is what actual Bloom filters do); note that multi-threading can be used to speed up the lookups when r is huge and r << n;
when r is big and not much smaller than n, then the problem is pretty hard to solve efficiently and the best solution is certainly to sort both arrays (typically with a radix sort) and use a merge-based method to remove the duplicates, possibly with multiple threads when both r and n are huge (hard to implement).
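For illustration, here is a minimal dispatch sketch covering the first cases of this list (the thresholds are indicative, setdiff1d_dispatch is just an illustrative name, and it falls back to the Bloom-filter routine above for larger r):
def setdiff1d_dispatch(arr1, arr2):
    r = arr2.size
    if r == 0:
        return arr1.copy()            # nothing to remove
    if r == 1:
        return arr1[arr1 != arr2[0]]  # single vectorized boolean mask
    # For bigger (but still small) r, use the Bloom-filter based routine above
    return setdiff1d_nb_faster(arr1, arr2)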
Let's start with the SIMD-based solution. Here is an implementation:
@nb.njit('int32[:](int32[::1], int32[::1])')
def setdiff1d_nb_simd(arr1, arr2):
    out = np.empty_like(arr1)
    limit = arr1.size // 4 * 4
    limit2 = arr2.size // 2 * 2
    cur = 0
    z32 = np.int32(0)

    # Tile (x4) based computation
    for i in range(0, limit, 4):
        f0, f1, f2, f3 = z32, z32, z32, z32
        v0, v1, v2, v3 = arr1[i], arr1[i+1], arr1[i+2], arr1[i+3]
        # Unrolled (x2) loop searching for a match in `arr2`
        for j in range(0, limit2, 2):
            val1 = arr2[j]
            val2 = arr2[j+1]
            f0 += (v0 == val1) + (v0 == val2)
            f1 += (v1 == val1) + (v1 == val2)
            f2 += (v2 == val1) + (v2 == val2)
            f3 += (v3 == val1) + (v3 == val2)
        # Remainder of the previous loop
        if limit2 != arr2.size:
            val = arr2[arr2.size-1]
            f0 += v0 == val
            f1 += v1 == val
            f2 += v2 == val
            f3 += v3 == val
        if f0 == 0: out[cur] = arr1[i+0]; cur += 1
        if f1 == 0: out[cur] = arr1[i+1]; cur += 1
        if f2 == 0: out[cur] = arr1[i+2]; cur += 1
        if f3 == 0: out[cur] = arr1[i+3]; cur += 1

    # Remainder
    for i in range(limit, arr1.size):
        if arr1[i] not in arr2:
            out[cur] = arr1[i]
            cur += 1

    return out[:cur]
It turns out this implementation is always slower than the hash-based one on my machine, since Numba clearly generates inefficient code for the inner arr2-based loop, and this appears to come from broken optimizations related to the ==: Numba simply fails to use SIMD instructions for this operation (for no apparent reason). This prevents many alternative SIMD-related codes from being fast as long as they use Numba.
Another issue with Numba is that np.where is slow since it uses a naive implementation, while the one of Numpy has been heavily optimized. The optimization done in Numpy can hardly be applied to the Numba implementation due to the previous issue. This prevents any speed-up from using np.where in Numba code.
In practice, the hash-based implementation is pretty fast and the copy already takes a significant amount of time on my machine. The computing part can be sped up using multiple threads. This is not easy since the parallelism model of Numba is very limited. The copy cannot be easily optimized with Numba (one can use non-temporal stores, but this is not yet supported by Numba) unless the computation is possibly done in-place.
To use multiple threads, one strategy is to first split the range into chunks and then:
build a boolean array determining, for each item of arr1, whether the item is found in arr2 or not (fully parallel)
count the number of items found per chunk (fully parallel)
compute the offset of the destination chunk (hard to parallelize, especially with Numba, but fast thanks to chunks)
copy the chunk to the target location without copying found items (fully parallel)
Here is an efficient parallel hash-based implementation:
@nb.njit('int32[:](int32[:], int32[:])', parallel=True)
def setdiff1d_nb_faster_par(arr1, arr2):
    # Pre-computation of the bloom-filter
    bloomFilter = np.zeros(4096, dtype=np.uint8)
    for j in range(arr2.size):
        bloomFilter[hash_32bit_4k(arr2[j])] = True

    chunkSize = 1024 # To tune regarding the kind of input
    chunkCount = (arr1.size + chunkSize - 1) // chunkSize

    # Find for each item of `arr1` if the value is in `arr2` (parallel)
    # and count the number of items found for each chunk on the fly.
    # Note: thanks to page faults, big parts of `found` are not even written in memory if `arr2` is small
    found = np.zeros(arr1.size, dtype=nb.bool_)
    foundCountByChunk = np.empty(chunkCount, dtype=nb.uint16)
    for i in nb.prange(chunkCount):
        start, end = i * chunkSize, min((i + 1) * chunkSize, arr1.size)
        foundCountInChunk = 0
        for j in range(start, end):
            val = arr1[j]
            if bloomFilter[hash_32bit_4k(val)] and val in arr2:
                found[j] = True
                foundCountInChunk += 1
        foundCountByChunk[i] = foundCountInChunk

    # Compute the location of the destination chunks (sequential)
    outChunkOffsets = np.empty(chunkCount, dtype=nb.uint32)
    foundCount = 0
    for i in range(chunkCount):
        outChunkOffsets[i] = i * chunkSize - foundCount
        foundCount += foundCountByChunk[i]

    # Parallel chunk-based copy
    out = np.empty(arr1.size-foundCount, dtype=arr1.dtype)
    for i in nb.prange(chunkCount):
        srcStart, srcEnd = i * chunkSize, min((i + 1) * chunkSize, arr1.size)
        cur = outChunkOffsets[i]
        # Optimization: we can copy the whole chunk if there is nothing found in it
        if foundCountByChunk[i] == 0:
            out[cur:cur+(srcEnd-srcStart)] = arr1[srcStart:srcEnd]
        else:
            for j in range(srcStart, srcEnd):
                if not found[j]:
                    out[cur] = arr1[j]
                    cur += 1

    return out
This implementation is the fastest for the target input on my machine. It is generally fast when n is quite big and the overhead of creating threads is relatively small on the target platform (e.g. on PCs, but typically not on computing servers with many cores). The overhead of the parallel implementation is significant, so the number of cores on the target machine needs to be at least 4 for the implementation to be significantly faster than the sequential one.
It may be useful to tune the chunkSize variable for the target inputs. If r << n, it is better to use a pretty big chunkSize. That being said, the number of chunks needs to be sufficiently big for multiple threads to operate on many chunks. Thus, chunkSize should be significantly smaller than n / numberOfThreads.
On my machine, most of the time (65-70%) is spent in the final copy, which is mostly memory-bound and can hardly be optimized further with Numba.
Results
Here are results on my i5-9600KF-based machine (with 6 cores):
setdif1d_np: 2.65 ms
setdif1d_in1d_np: 2.61 ms
setdiff1d_nb: 2.33 ms
setdiff1d_nb_simd: 1.85 ms
setdiff1d_nb_faster: 0.73 ms
setdiff1d_nb_faster_par: 0.49 ms
The best provided implementation is about 4~5 times faster than the other ones.
What I found is that hashing does not help. It is just a trick for the 2D case, to convert 1D arrays to single numbers and put them as such in a set.
Below is the method of norok2 that I converted to 1D arrays (and added annotations for faster compilation).
Note that this is only slightly (20-30%) faster than the methods you already have. And of course, that is after the second function call; on the first call it is slightly slower due to compilation.
@nb.njit('int32[:](int32[:], int32[:])')
def setdiff1d_nb(arr1, arr2):
    delta = set(arr2)
    # : build the result
    result = np.empty(len(arr1), dtype=arr1.dtype)
    j = 0
    for i in range(arr1.shape[0]):
        if arr1[i] not in delta:
            result[j] = arr1[i]
            j += 1
    return result[:j]

Why is this numba code to store and process pointcloud data slower than the pure Python version?

I need to store some data structure like that:
{'x1,y1,z1': [[p11_x,p11_y,p11_z], [p12_x,p12_y,p12_z], ..., [p1n_x,p1n_y,p1n_z]],
'x2,y2,z2': [[p21_x,p21_y,p21_z], [p22_x,p22_y,p22_z], ..., [p2n_x,p2n_y,p2n_z]],
...
'xn,yn,zn': [[pn1_x,pn1_y,pn1_z], [pn2_x,pn2_y,pn2_z], ..., [pnm_x,pnm_y,pnm_z]]}
Every key is a grid cell index and the value is a list of classified points. The list can be variable length but I can set it static, for example 1000 elements.
For now I tried something like this:
np.zeros(shape=(100,100,100,50,3))
But if I use numba.jit with that function, the execution time is a few times worse than with pure Python.
Simple Python example of what I want to do:
def split_into_grid_py(points: np.array):
    grid = {}
    for point in points:
        r_x = round(point[0])
        r_y = round(point[1])
        r_z = round(point[2])
        try:
            grid[(r_x, r_y, r_z)].append(point)
        except KeyError:
            grid[(r_x, r_y, r_z)] = [point]
    return grid
Is there any efficient way of doing that with numba?
Times per 10 executions in a loop are like:
numba: 7.050494909286499
pure Python: 1.0014197826385498
With the same data set, so it's crap optimization.
My numba code:
@numba.jit(nopython=True)
def split_into_grid(points: np.array):
    grid = np.zeros(shape=(100,100,100,50,3))
    for point in points:
        r_x = round(point[0])
        r_y = round(point[1])
        r_z = round(point[2])
        i = 0
        for cell in grid[r_x][r_y][r_z]:
            if not np.sum(cell):
                grid[r_x][r_y][r_z][i] = point
                break
            i += 1
    return grid
The pure Python version appends items in O(1) time thanks to the dictionary container, while the Numba version uses an O(n) array search (bounded by 50). Moreover, np.zeros(shape=(100,100,100,50,3)) allocates an array of about 1 GiB, which results in many cache misses during the computation, which will be done in RAM. Meanwhile, the pure Python version may fit in the CPU caches. There are two strategies to solve that.
The first strategy is to use 3 containers: an array keyGrid mapping each grid cell to an offset in a second array valueGrid, or -1 if there is no point associated with that cell. valueGrid contains all the points for a given grid cell. Finally, countingGrid counts the number of points per grid cell. Here is an untested example:
@numba.jit(nopython=True)
def split_into_grid(points: np.array):
    # Note: a signed type is needed for the -1 sentinel;
    # np.int16 is enough if the number of filled grid cells is less than 32768
    keyGrid = np.full((100, 100, 100), -1, dtype=np.int32)
    i = 0
    for point in points:
        r_x = round(point[0])
        r_y = round(point[1])
        r_z = round(point[2])
        if keyGrid[r_x, r_y, r_z] < 0:
            keyGrid[r_x, r_y, r_z] = i
            i += 1
    uniqueCloudPointCount = i

    # Note the number of points per grid cell is also bounded by the type
    countingGrid = np.zeros(uniqueCloudPointCount, dtype=np.uint8)
    valueGrid = np.full((uniqueCloudPointCount, 50, 3), -1, dtype=np.int32)

    for point in points:
        r_x = round(point[0])
        r_y = round(point[1])
        r_z = round(point[2])
        key = keyGrid[r_x, r_y, r_z]
        addingPos = countingGrid[key]
        valueGrid[key, addingPos] = point
        countingGrid[key] += 1

    return (keyGrid, valueGrid, countingGrid)
Note that the arrays are quite small as long as not all grid cells contain points, resulting in fewer cache misses. Moreover, the mapping of each point is done in (small) constant time, resulting in much faster code.
The second strategy is to use the same method as in the pure Python implementation, but with Numba types. Indeed, Numba experimentally supports dictionaries. You can replace the exception handling with a dictionary check ((r_x, r_y, r_z) in grid), which will cause fewer compilation issues and will likely speed up the resulting code. Note that Numba dicts are often as fast as CPython ones (if not slower), so the resulting code may not be much faster.
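Here is an untested sketch of this second strategy (the typed-container API is experimental, the function name split_into_grid_typed_dict is only illustrative, and points is assumed to be a 2D float64 array):
import numpy as np
import numba as nb
from numba import types
from numba.typed import Dict, List

# Types declared outside the jitted function (used as compile-time constants)
point_type = types.float64[:]
key_type = types.UniTuple(types.int64, 3)
value_type = types.ListType(point_type)

@nb.njit
def split_into_grid_typed_dict(points):
    grid = Dict.empty(key_type, value_type)
    for k in range(points.shape[0]):
        key = (int(round(points[k, 0])),
               int(round(points[k, 1])),
               int(round(points[k, 2])))
        if key in grid:                       # dictionary check instead of try/except
            grid[key].append(points[k])
        else:
            lst = List.empty_list(point_type)
            lst.append(points[k])
            grid[key] = lst
    return grid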

Find Top-K Smallest Values So Far in Data Stream

Let's say that I have a data stream where single data point is retrieved at a time:
import numpy as np
def next_data_point():
    """
    Mock a data stream. Data points will always be a positive float
    """
    return np.random.uniform(0, 1_000_000)
I need to be able to update a NumPy array and track the top-K smallest-values-so-far from this stream (or until the user decides when it is okay to stop the analysis via some check_stop_condition() function). Let's say we want to capture the top 1,000 smallest values from the stream, then a naive way to accomplish this might be:
k = 1000
topk = np.full(k, fill_value=np.inf, dtype='float')
while check_stop_condition():
    topk[:] = np.sort(np.append(topk, next_data_point()))[:k]
This works fine but is quite inefficient and can be slow if repeated millions of times since we are:
creating a new array every time
sorting the concatenated array every time
So, I came up with a different approach to address these 2 inefficiencies:
k = 1000
topk = np.full(k, fill_value=np.inf)
while check_stop_condition():
    data_point = next_data_point()
    idx = np.searchsorted(topk, data_point)
    if idx < k:
        topk[idx + 1 :] = topk[idx : -1]
        topk[idx] = data_point
Here, I leverage np.searchsorted() to replace np.sort and to quickly find the insertion point, idx, for the next data point. I believe that np.searchsorted uses some sort of binary search and assumes that the initial array is pre-sorted first. Then, we shift the data in topk to accommodate and insert the new data point if and only if idx < k.
I haven't seen this being done anywhere, and so my question is whether there is anything that can be done to make this even more efficient, especially in the way that I am shifting things around inside the if statement.
Sorting a huge array is very expensive, so it is not surprising that the second method is faster. However, the speed of the second method is probably bounded by the slow array copy. The complexity of the first method is O(n k log(k)), while the second method has a complexity of O(n (log(k) + k p)), with n the number of points and p the probability of the branch being taken.
To build a faster implementation, you can use a tree. More specifically a self-balancing binary search tree for example. Here is the algorithm:
topk = Tree()
maxi = np.inf
while check_stop_condition():            # O(n)
    data_point = next_data_point()
    if len(topk) < 1000:                 # O(1)
        topk.insert(data_point)          # O(log k)
    elif data_point < maxi:              # values >= maxi are discarded in O(1)
        topk.insert(data_point)          # O(log k)
        topk.deleteMaxNode()             # O(log k)
        maxi = topk.findMaxValue()       # O(log k)
The above algorithm runs in O(n log k). One can show that this complexity is optimal (using only data_point comparisons).
In practice, binary heaps can be a bit faster (with the same complexity). Indeed, they have several advantages over self-balancing binary search trees in this case:
they can be implemented in a very compact way in memory (reducing cache misses and memory consumption)
insertion of the first k=1000 items can be done in O(k) time and very quickly
Note that discarded values are handled in constant time. This happens a lot on huge random datasets, as most of the values quickly become bigger than maxi. One can even prove that random datasets can be processed in O(n) time (optimal).
Note that Python 3 provides a standard heap implementation called heapq which is probably a good starting point.
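For instance, here is a minimal sketch using heapq, reusing the next_data_point and check_stop_condition functions from the question (heapq is a min-heap, so negated values are stored to emulate a max-heap of the k smallest values):
import heapq
import numpy as np

k = 1000
heap = []  # holds the k smallest values seen so far, negated
while check_stop_condition():
    data_point = next_data_point()
    if len(heap) < k:
        heapq.heappush(heap, -data_point)
    elif data_point < -heap[0]:          # smaller than the current k-th smallest value
        heapq.heapreplace(heap, -data_point)
topk = np.sort(-np.array(heap))          # the k smallest values, in ascending order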

Updating nested list takes too long in python

I am trying to implement the Brown clustering algorithm in Python.
I have a data structure of clusters = List[List].
At any given time the outer list length will be at most 40 or 41.
The inner lists contain English words such as 'the', 'hello', etc.
So I have 8000 words in total (the vocabulary), and initially the first 40 words are put into clusters.
I iterate over my vocabulary from 41 to 8000:
# do some computation; this takes very little time
# merge 2 items in the list and delete one item from the list
# ex: if c1 and c2 are items of clusters then
for i in range(41, 8000):
    clusters.append(vocabulary[i])
    c1 = computation 1
    c2 = computation 2
    clusters[c1] = clusters[c1] + clusters[c2]
    del clusters[c2]
But the time taken by the line clusters[c1] = clusters[c1] + clusters[c2] grows gradually as I iterate over my vocabulary. Initially, for items 41-50 it is 1 sec, but for every further 20 items in the vocabulary the time grows by 1 sec.
On commenting out just clusters[c1] = clusters[c1] + clusters[c2] from my code, I observe that all iterations take constant time. I am not sure how I can speed up this process.
for i in range(41, 8000):
    clusters.append(vocabulary[i])
    c1 = computation 1
    c2 = computation 2
    #clusters[c1] = clusters[c1] + clusters[c2]
    del clusters[c2]
I am new to Stack Overflow; please excuse any incorrect formatting here.
Thanks
The problem you're running into is that list concatenation is a linear time operation. Thus, your entire loop is O(n^2) (and that's prohibitively slow for n much larger than 1000). This is ignoring how copying such large lists can be bad for cache performance, etc.
Disjoint Set data structure
The solution I recommend is to use a disjoint set data structure. This is a tree-based data structure that "self-flattens" as you perform queries, resulting in very fast runtimes for "merging" clusters.
The basic idea is that each word starts off as its own "singleton" tree, and merging clusters consists of making the root of one tree the child of another. This repeats (with some care for balancing) until you have as many clusters as desired.
I've written an example implementation (GitHub link) that assumes the elements of each set are numbers. As long as you have a mapping from vocabulary terms to integers, it should work just fine for your purposes. (Note: I've done some preliminary testing, but I wrote it in 5 minutes right now, so I'd recommend checking my work. ;) )
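The linked implementation is not reproduced here, but a minimal union-find sketch with the same join/query interface (path compression plus union by size) could look like this:
class DisjointSet:
    """Minimal union-find with path compression and union by size."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        # Walk up to the root, flattening the path as we go
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def join(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

    def query(self, a, b):
        return self.find(a) == self.find(b)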
To use in your code, I would do something like the following:
clusters = DisjointSet(8000)
# some code to merge the first 40 words into clusters
for i in range(41, 8000):
    c1 = some_computation()  # assuming c1 is a number
    c2 = some_computation()  # assuming c2 is a number
    clusters.join(c1, c2)

# Now, if you want to determine if some word with number k is
# in the same cluster as a word with number j:
print("{} and {} are in the same cluster? {}".format(j, k, clusters.query(j, k)))
Regarding Sets vs Lists
While sets provide faster access times than lists, they actually have worse runtime when copying. This makes sense in theory, because a set object actually has to allocate and assign more memory space than a list for an appropriate load factor. Also, it is likely that inserting so many items could result in a "rehash" of the entire hash table, which is a quadratic-time operation in the worst case.
However, practice is what we're concerned with now, so I ran a quick experiment to determine exactly how much worse off sets are than lists.
Code for performing this test, in case anyone was interested, is below. I'm using the Intel packaging of Python, so my performance may be slightly faster than on your machine.
import time
import random
import numpy as np
import matplotlib.pyplot as plt

data = []
for trial in range(5):
    trial_data = []
    for N in range(0, 20000, 50):
        l1 = random.sample(range(1000000), N)
        l2 = random.sample(range(1000000), N)
        s1 = set(l1)
        s2 = set(l2)
        # Time to concatenate two lists of length N
        start_lst = time.perf_counter()
        l3 = l1 + l2
        stop_lst = time.perf_counter()
        # Time to union two sets of length N
        start_set = time.perf_counter()
        s3 = s1 | s2
        stop_set = time.perf_counter()
        trial_data.append([N, stop_lst - start_lst, stop_set - start_set])
    data.append(trial_data)

# average the trials and plot
data_array = np.array(data)
avg_data = np.average(data_array, 0)
fig = plt.figure()
ax = plt.gca()
ax.plot(avg_data[:,0], avg_data[:,1], label='Lists')
ax.plot(avg_data[:,0], avg_data[:,2], label='Sets')
ax.set_xlabel('Length of set or list (N)')
ax.set_ylabel('Seconds to union or concat (s)')
plt.legend(loc=2)
plt.show()

How to Parallelized scipy cosine similarity calculation

I have generated a large data frame by reading a large number of files in a directory. I have managed to parallelize the section that reads and parses the files. I take that data and generate the data frame for the next step, which is calculating the similarity matrix.
Now I am trying to calculate the cosine similarity between rows of the data frame. Since it is a large data frame, it takes a long time (hours) to run. How can I parallelize this process?
Here is my current code for calculating the cosine similarity, which runs on a single thread:
import numpy as np
from scipy.spatial.distance import cosine

df = df.fillna(0)
data = df.values
m, k = data.shape
mat = np.zeros((m, m))
"""
scipy cosine similarity is between 0-2 instead of -1 to 1
in that case 1 is 0 and 2 is -1
"""
for i in range(m):
    for j in range(m):
        if i != j:
            mat[i][j] = 1 - cosine(data[i,:], data[j,:])
        else:
            mat[i][j] = 1. # 0 if we don't do 1-cosine()
First, I'm assuming your cosine is scipy.spatial.distance.cosine, whose key calculation is:
dist = 1.0 - np.dot(u, v) / (norm(u) * norm(v))
So it looks like I can replace your double loop with:
data1 = data/np.linalg.norm(data,axis=1)[:,None]
mat1 = np.einsum('ik,jk->ij', data1, data1)
That is, normalize data once at the start, rather than for each pair. Then use einsum to calculate the whole set of dot products.
For a small test case (m,k=4,3), this is 25x faster than your double loop.
Cautions: I've only tested against your answer for one small data array.
scipy.spatial.distance.norm and cosine have some checks that I haven't implemented.
einsum, while fast for this sort of thing on modest size arrays, can get bogged down with larger ones, and will run into memory errors before your element by element dot. And the underlying dot library may be better tuned to handle multi-core machines.
But even if data is too large to handle with one call to einsum, you could break the calculation into blocks, e.g.
mat[n1:n2,m1:m2] = np.einsum('ik,jk->ij', data1[n1:n2,:], data1[m1:m2,:])
I'd like to point you in the direction of https://docs.python.org/2/library/multiprocessing.html
Take note of pool.map(function, iterable).
Then build the set of triangular position tuples, write the appropriate function, and fire away.
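For example, here is a rough sketch along those lines (my own illustration, not a drop-in solution; it assumes the data matrix is visible to the workers, e.g. via fork on Linux, and the names are arbitrary):
import itertools
import numpy as np
from multiprocessing import Pool
from scipy.spatial.distance import cosine

data = df.fillna(0).values              # same matrix as in the question

def pair_similarity(pair):
    i, j = pair
    return i, j, 1.0 - cosine(data[i, :], data[j, :])

if __name__ == '__main__':
    m = data.shape[0]
    mat = np.eye(m)                     # diagonal is 1: a row is always similar to itself
    pairs = itertools.combinations(range(m), 2)   # upper-triangular position tuples
    with Pool() as pool:
        for i, j, sim in pool.map(pair_similarity, pairs, chunksize=1000):
            mat[i, j] = mat[j, i] = sim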
