Fast way to select from numpy array without intermediate index array - python

Given the following 2-column array, I want to select items from the second column that correspond to "edges" in the first column. This is just an example, as in reality my a has potentially millions of rows. So, ideally I'd like to do this as fast as possible, and without creating intermediate results.
import numpy as np
a = np.array([[1,4],[1,2],[1,3],[2,6],[2,1],[2,8],[2,3],[2,1],
[3,6],[3,7],[5,4],[5,9],[5,1],[5,3],[5,2],[8,2],
[8,6],[8,8]])
i.e. I want to find the result,
desired = np.array([4,6,6,4,2])
which is entries in a[:,1] corresponding to where a[:,0] changes.
One solution is,
b = a[(a[1:,0]-a[:-1,0]).nonzero()[0]+1, 1]
which gives np.array([6,6,4,2]); I could simply prepend the first item, no problem. However, this creates an intermediate array of the indexes of the first items. I could avoid the intermediate by using a list comprehension:
c = [a[i+1,1] for i,(x,y) in enumerate(zip(a[1:,0],a[:-1,0])) if x!=y]
This also gives [6,6,4,2]. Assuming a generator-based zip (true in Python 3), this doesn't need to create an intermediate representation and should be very memory efficient. However, the inner loop is not numpy, and it necessitates generating a list which must be subsequently turned back into a numpy array.
Can you come up with a numpy-only version with the memory efficiency of c but the speed efficiency of b? Ideally only one pass over a is needed.
(Note that measuring the speed won't help much here unless a is very big, so I wouldn't bother benchmarking this; I just want something that is theoretically fast and memory efficient. For example, you can assume rows in a are streamed from a file and are slow to access -- another reason to avoid the b solution, as it requires a second random-access pass over a.)
Edit: a way to generate a large a matrix for testing:
from itertools import repeat
N, M = 100000, 100
# wrap zip in list() so this also works on Python 3, where zip is lazy
a = np.array(list(zip([x for y in zip(*repeat(np.arange(N),M)) for x in y],
                      np.random.random(N*M))))

I am afraid that if you are looking to do this in a vectorized way, you can't avoid an intermediate array, as there's no built-in for it.
Now, let's look for vectorized approaches other than nonzero(), which might be more performant. Going by the same differencing idea as in the original code, (a[1:,0]-a[:-1,0]), we can use boolean indexing after looking for the non-zero differences that correspond to "edges" or shifts.
Thus, we would have a vectorized approach like so -
a[np.append(True,np.diff(a[:,0])!=0),1]
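As a quick sanity check with the sample array from the question, this mask picks out exactly the desired values:
import numpy as np
a = np.array([[1,4],[1,2],[1,3],[2,6],[2,1],[2,8],[2,3],[2,1],
              [3,6],[3,7],[5,4],[5,9],[5,1],[5,3],[5,2],[8,2],
              [8,6],[8,8]])
print(a[np.append(True, np.diff(a[:, 0]) != 0), 1])   # -> [4 6 6 4 2]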
Runtime test
The original solution a[(a[1:,0]-a[:-1,0]).nonzero()[0]+1,1] would skip the first row. But, for the sake of timing, let's treat it as a valid result. Here are the runtimes for it against the solution proposed in this post -
In [118]: from itertools import repeat
...: N, M = 100000, 2
...: a = np.array(zip([x for y in zip(*repeat(np.arange(N),M))\
for x in y ], np.random.random(N*M)))
...:
In [119]: %timeit a[(a[1:,0]-a[:-1,0]).nonzero()[0]+1,1]
100 loops, best of 3: 6.31 ms per loop
In [120]: %timeit a[1:][np.diff(a[:,0])!=0,1]
100 loops, best of 3: 4.51 ms per loop
Now, let's say you want to include the first row too. The updated runtimes would look something like this -
In [123]: from itertools import repeat
...: N, M = 100000, 2
...: a = np.array(zip([x for y in zip(*repeat(np.arange(N),M))\
for x in y ], np.random.random(N*M)))
...:
In [124]: %timeit a[np.append(0,(a[1:,0]-a[:-1,0]).nonzero()[0]+1),1]
100 loops, best of 3: 6.8 ms per loop
In [125]: %timeit a[np.append(True,np.diff(a[:,0])!=0),1]
100 loops, best of 3: 5 ms per loop

OK, I actually found a solution; I just learned about np.fromiter, which can build a numpy array from a generator:
d = np.fromiter((a[i+1,1] for i,(x,y) in enumerate(zip(a[1:,0],a[:-1,0])) if x!=y), int)
I think this does it: it generates a numpy array without any intermediate arrays. However, the caveat is that it does not seem to be all that efficient! Forgetting what I said in the question about testing:
t = [lambda a: a[(a[1:,0]-a[:-1,0]).nonzero()[0]+1, 1],
     lambda a: np.array([a[i+1,1] for i,(x,y) in enumerate(zip(a[1:,0],a[:-1,0])) if x!=y]),
     lambda a: np.fromiter((a[i+1,1] for i,(x,y) in enumerate(zip(a[1:,0],a[:-1,0])) if x!=y), int)]
from timeit import Timer
[Timer(lambda f=x: f(a)).timeit(number=10) for x in t]  # Timer needs a callable, not the result of x(a)
[0.16596235800034265, 1.811289312000099, 2.1662971739997374]
It seems the first solution is drastically faster! I assume this is because even though it generates intermediate data, it is able to perform the inner loop completely in numpy, while the others run Python code for each item in the array.
Like I said, this is why I'm not sure this kind of benchmarking makes sense here -- if accesses to a were much slower, the benchmark wouldn't be CPU-bound. Thoughts?
Not "accepting" this answer since I am hoping someone can come up with something faster.

If memory efficiency is your concern, that can be addressed: the only intermediate of the same size-order as the input data can be of type bool (a[1:,0] != a[:-1,0]); and if your input data is int32, that mask is 8 times smaller than a itself. You can also count the nonzeros of that boolean array to preallocate the output array, though that should not be very significant either if the output of the != is as sparse as your example suggests.
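A minimal sketch of that idea, using the a from the question (the compress-with-out call here is my own choice for filling the preallocated output without another temporary):
import numpy as np
a = np.array([[1,4],[1,2],[1,3],[2,6],[2,1],[2,8],[2,3],[2,1],
              [3,6],[3,7],[5,4],[5,9],[5,1],[5,3],[5,2],[8,2],
              [8,6],[8,8]])
mask = a[1:, 0] != a[:-1, 0]                               # the only full-length intermediate, 1 byte per row
out = np.empty(np.count_nonzero(mask) + 1, dtype=a.dtype)  # preallocated; +1 for the first row
out[0] = a[0, 1]
np.compress(mask, a[1:, 1], out=out[1:])                   # fill the rest without another temporary
print(out)                                                 # -> [4 6 6 4 2]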


what is the quickest way to iterate through a numpy array

I noticed a meaningful difference between iterating through a numpy array "directly" versus iterating via the tolist method. See the timings below:
directly
[i for i in np.arange(10000000)]
via tolist
[i for i in np.arange(10000000).tolist()]
Considering I've discovered one way to go faster, I wanted to ask: what else might make it go faster?
What is the fastest way to iterate through a numpy array?
This is actually not surprising. Let's examine the methods one at a time, starting with the slowest.
[i for i in np.arange(10000000)]
This method asks Python to reach into the numpy array (stored in the C memory scope), one element at a time, allocate a Python object in memory, and create a pointer to that object in the list. Each time you cross between the numpy array stored in the C backend and pure Python, there is an overhead cost. This method pays that cost 10,000,000 times.
Next:
[i for i in np.arange(10000000).tolist()]
In this case, using .tolist() makes a single call to the numpy C backend and allocates all of the elements in one shot to a list. You then are using python to iterate over that list.
Finally:
list(np.arange(10000000))
This basically does the same thing as above, but it creates a list of numpy's native type objects (e.g. np.int64). Using list(np.arange(10000000)) and np.arange(10000000).tolist() should be about the same time.
So, in terms of iteration, the primary advantage of using numpy is that you don't need to iterate. Operations are applied in a vectorized fashion over the array. Iteration just slows it down. If you find yourself iterating over array elements, you should look into restructuring the algorithm you are attempting so that it uses only numpy operations (it has soooo many built-ins!), or if really necessary you can use np.apply_along_axis, np.apply_over_axes, or np.vectorize.
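To make that concrete, here is a small illustrative sketch (my own example, not from the answer above): the same element-wise operation written three ways; only the last stays entirely in compiled numpy code.
import numpy as np

arr = np.arange(100_000)

looped     = np.array([v * 2 + 1 for v in arr])      # Python-level loop over array elements
wrapped    = np.vectorize(lambda v: v * 2 + 1)(arr)  # np.vectorize is a convenience, not a speedup
vectorized = arr * 2 + 1                             # runs entirely in compiled code

assert (looped == vectorized).all() and (wrapped == vectorized).all()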
These are my timings on a slower machine
In [1034]: timeit [i for i in np.arange(10000000)]
1 loop, best of 3: 2.16 s per loop
If I generate the range directly (Py3, so this is a lazy range object rather than a list), the times are much better. Take this as a baseline for a list comprehension of this size.
In [1035]: timeit [i for i in range(10000000)]
1 loop, best of 3: 1.26 s per loop
tolist converts the arange to a list first; takes a bit longer, but the iteration is still on a list
In [1036]: timeit [i for i in np.arange(10000000).tolist()]
1 loop, best of 3: 1.6 s per loop
Using list() - same time as direct iteration on the array; that suggests the direct iteration effectively does this conversion first.
In [1037]: timeit [i for i in list(np.arange(10000000))]
1 loop, best of 3: 2.18 s per loop
In [1038]: timeit np.arange(10000000).tolist()
1 loop, best of 3: 927 ms per loop
Same time as iterating on the .tolist() result:
In [1039]: timeit list(np.arange(10000000))
1 loop, best of 3: 1.55 s per loop
In general if you must loop, working on a list is faster. Access to elements of a list is simpler.
Look at the elements returned by indexing.
a[0] is another numpy object; it is constructed from the values in a, but not simply a fetched value
list(a)[0] is the same type; the list is just [a[0], a[1], a[2]]
In [1043]: a = np.arange(3)
In [1044]: type(a[0])
Out[1044]: numpy.int32
In [1045]: ll=list(a)
In [1046]: type(ll[0])
Out[1046]: numpy.int32
but tolist converts the array into a pure list, in this case a list of ints. It does more work than list(), but does it in compiled code.
In [1047]: ll=a.tolist()
In [1048]: type(ll[0])
Out[1048]: int
In general don't use list(anarray). It rarely does anything useful, and is not as powerful as tolist().
What's the fastest way to iterate through an array? None. At least not in Python; in C code there are fast ways.
a.tolist() is the fastest, vectorized way of creating a list of integers from an array. It iterates, but does so in compiled code.
But what is your real goal?
The speedup via tolist only holds for 1D arrays. Once you add a second axis, the performance gain disappears:
1D
import numpy as np
import timeit
num_repeats = 10
x = np.arange(10000000)
via_tolist = timeit.timeit("[i for i in x.tolist()]", number=num_repeats, globals={"x": x})
direct = timeit.timeit("[i for i in x]",number=num_repeats, globals={"x": x})
print(f"tolist: {via_tolist / num_repeats}")
print(f"direct: {direct / num_repeats}")
tolist: 0.430838281600154
direct: 0.49088368080047073
2D
import numpy as np
import timeit
num_repeats = 10
x = np.arange(10000000*10).reshape(-1, 10)
via_tolist = timeit.timeit("[i for i in x.tolist()]", number=num_repeats, globals={"x": x})
direct = timeit.timeit("[i for i in x]", number=num_repeats, globals={"x": x})
print(f"tolist: {via_tolist / num_repeats}")
print(f"direct: {direct / num_repeats}")
tolist: 2.5606724178003786
direct: 1.2158976945000177
My test case is a numpy array:
[[ 34 107]
[ 963 144]
[ 921 1187]
[ 0 1149]]
I'm going through this only once using range and enumerate
USING range
from timeit import default_timer
# box is the 4x2 array shown above

loopTimer1 = default_timer()
for l1 in range(0, 4):
    print(box[l1])
print("Time taken by range: ", default_timer() - loopTimer1)
Result
[ 34 107]
[963 144]
[ 921 1187]
[ 0 1149]
Time taken by range: 0.0005405639985838206
USING enumerate
loopTimer2 = default_timer()
for l2, v2 in enumerate(box):
    print(box[l2])  # could also just print(v2)
print("Time taken by enumerate: ", default_timer() - loopTimer2)
Result
[ 34 107]
[963 144]
[ 921 1187]
[ 0 1149]
Time taken by enumerate: 0.00025605700102460105
In this test case, enumerate works faster.

exceptions for numpy arrays

I'm looking to remove values that fall within a constant range around the values held in a second array. I.e. I have one large np array, and I want to remove values within +-3 of the entries of another array of specific values, say [20,50,90,210]. So if my large array were [14,21,48,54,92,215], I would want [14,54,215] returned. The values are double precision, so I'm trying to avoid creating a large mask array that removes specific exact values, and use a range instead.
You mentioned that you wanted to avoid a large mask array. Unless both your "large array" and your "specific values" array are very large, I wouldn't try to avoid this. Often, with numpy it's best to allow relatively large temporary arrays to be created.
However, if you do need to control memory usage more tightly, you have several options. A typical trick is to only vectorize one part of the operation and iterate over the shorter input (this is shown in the second example below). It saves having nested loops in Python, and can significantly decrease the memory usage involved.
I'll show three different approaches. There are several others (including dropping down to C or Cython if you really need tight control and performance), but hopefully this gives you some ideas.
On a side note, for these small inputs, the overhead of array creation will overwhelm the differences. The speed and memory usage I'm referring to is only for large (>~1e6 elements) arrays.
Fully vectorized, but most memory usage
The easiest way is to calculate all distances at once and then reduce the mask back to the same shape as the initial array. For example:
import numpy as np
vals = np.array([14,21,48,54,92,215])
other = np.array([20,50,90,210])
dist = np.abs(vals[:,None] - other[None,:])
mask = np.all(dist > 3, axis=1)
result = vals[mask]
Partially vectorized, intermediate memory usage
Another option is to build up the mask iteratively for each element in the "specific values" array. This iterates over all elements of the shorter "specific values" array (a.k.a. other in this case):
import numpy as np
vals = np.array([14,21,48,54,92,215])
other = np.array([20,50,90,210])
mask = np.ones(len(vals), dtype=bool)
for num in other:
    dist = np.abs(vals - num)
    mask &= dist > 3
result = vals[mask]
Slowest, but lowest memory usage
Finally, if you really want to reduce memory usage, you could iterate over every item in your large array:
import numpy as np
vals = np.array([14,21,48,54,92,215])
other = np.array([20,50,90,210])
result = []
for num in vals:
    if np.all(np.abs(num - other) > 3):
        result.append(num)
The temporary list in that case is likely to take up more memory than the mask in the previous version. However, you could avoid the temporary list by using np.fromiter if you wanted. The timing comparison below shows an example of this.
Timing Comparisons
Let's compare the speed of these functions. We'll use 10,000,000 elements in the "large array" and 4 values in the "specific values" array. The relative speed and memory usage of these functions depend strongly on the sizes of the two arrays, so you should only consider this as a vague guideline.
import numpy as np
vals = np.random.random(10**7)  # 10,000,000 elements (the size must be an integer)
other = np.array([0.1, 0.5, 0.8, 0.95])
tolerance = 0.05
def basic(vals, other, tolerance):
    dist = np.abs(vals[:,None] - other[None,:])
    mask = np.all(dist > tolerance, axis=1)
    return vals[mask]

def intermediate(vals, other, tolerance):
    mask = np.ones(len(vals), dtype=bool)
    for num in other:
        dist = np.abs(vals - num)
        mask &= dist > tolerance
    return vals[mask]

def slow(vals, other, tolerance):
    def func(vals, other, tolerance):
        for num in vals:
            if np.all(np.abs(num - other) > tolerance):
                yield num
    return np.fromiter(func(vals, other, tolerance), dtype=vals.dtype)
And in this case, the partially vectorized version wins out. That's to be expected in most cases where vals is significantly longer than other. However, the first example (basic) is almost as fast, and is arguably simpler.
In [7]: %timeit basic(vals, other, tolerance)
1 loops, best of 3: 1.45 s per loop
In [8]: %timeit intermediate(vals, other, tolerance)
1 loops, best of 3: 917 ms per loop
In [9]: %timeit slow(vals, other, tolerance)
1 loops, best of 3: 2min 30s per loop
Either way you choose to implement things, these are common vectorization "tricks" that show up in many problems. In high-level languages like Python, Matlab, R, etc., it's often useful to try full vectorization first, then mix vectorization and explicit loops if memory usage is an issue. Which one is best usually depends on the relative sizes of the inputs, but this is a common pattern to try when trading off speed against memory usage in high-level scientific programming.
You can try:
def closestmatch(x, y):
    val = np.abs(x - y)
    return val.min() >= 3
Then:
b[np.array([closestmatch(a, x) for x in b])]
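A quick check against the example in the question (note that here a holds the specific values and b the large array, the reverse of the question's naming):
import numpy as np

def closestmatch(x, y):
    val = np.abs(x - y)
    return val.min() >= 3

a = np.array([20, 50, 90, 210])           # the "specific values"
b = np.array([14, 21, 48, 54, 92, 215])   # the large array
print(b[np.array([closestmatch(a, x) for x in b])])   # -> [ 14  54 215]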

Product of a sequence in NumPy

I need to implement the following function with NumPy:
F_l(x) = (G(x)/A_l) * prod over j != l of ((G(x) - A_l)/(G(x) + A_j)) * ((A_l - A_j)/(A_l + A_j))
where the F_l(x) are N arrays that I need to calculate, which depend on an array G(x) that I am given, and the A_j are N coefficients that are also given. I would like to implement it in NumPy because I have to calculate the F_l(x) for every iteration of my program. The dummy way to do this is with for loops and ifs:
import numpy as np
A = np.arange(1.,5.,1)
G = np.array([[1.,2.],[3.,4.]])

def calcF(G, A):
    N = A.size
    print(A)
    print(N)
    F = []
    for l in range(N):
        F.append(G/A[l])
        print(F[l])
        for j in range(N):
            if j != l:
                F[l] *= ((G - A[l])/(G + A[j]))*((A[l] - A[j])/(A[l] + A[j]))
    return F

F = calcF(G, A)
print(F)
As for loops and if statements are relatively slow, I am looking for a clever NumPy way to do the same thing. Does anyone have an idea?
Listed in this post is a vectorized solution that makes heavy use of NumPy's powerful broadcasting feature, after extending the dimensions of the input arrays to 3D and 4D with np.newaxis/None at various places according to the computation involved. Here's the implementation -
# Get size of A
N = A.size
# Perform "(G - A[l])/(G + A[j]))" in a vectorized manner
p1 = (G - A[:,None,None,None])/(G + A[:,None,None])
# Perform "((A[l] - A[j])/(A[l] + A[j]))" in a vectorized manner
p2 = ((A[:,None] - A)/(A[:,None] + A))
# Elementwise multiplications between the previously calculated parts
p3 = p1*p2[...,None,None]
# Fill the j == l entries (the case skipped by "if j != l") with "G/A[l]"
p3[np.eye(N,dtype=bool)] = G/A[:,None,None]
Fout = p3.prod(1)
# If you need separate arrays just like in the question, split it
Fout_split = np.array_split(Fout,N)
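For reference, the sample run below calls a vectorized_calcF function; wrapping the snippet above into such a function (my own packaging of the posted code) might look like this:
import numpy as np

def vectorized_calcF(G, A):
    N = A.size
    p1 = (G - A[:,None,None,None])/(G + A[:,None,None])
    p2 = (A[:,None] - A)/(A[:,None] + A)
    p3 = p1*p2[...,None,None]
    p3[np.eye(N, dtype=bool)] = G/A[:,None,None]
    Fout = p3.prod(1)
    return np.array_split(Fout, N)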
Sample run -
In [284]: # Original inputs
...: A = np.arange(1.,5.,1)
...: G = np.array([[1.,2.],[3.,4.]])
...:
In [285]: calcF(G,A)
Out[285]:
[array([[-0. , -0.00166667],
[-0.01142857, -0.03214286]]), array([[-0.00027778, 0. ],
[ 0.00019841, 0.00126984]]), array([[ 1.26984127e-03, 1.32275132e-04],
[ -0.00000000e+00, -7.93650794e-05]]), array([[-0.00803571, -0.00190476],
[-0.00017857, 0. ]])]
In [286]: vectorized_calcF(G,A) # Posted solution
Out[286]:
[array([[[-0. , -0.00166667],
[-0.01142857, -0.03214286]]]), array([[[-0.00027778, 0. ],
[ 0.00019841, 0.00126984]]]), array([[[ 1.26984127e-03, 1.32275132e-04],
[ -0.00000000e+00, -7.93650794e-05]]]), array([[[-0.00803571, -0.00190476],
[-0.00017857, 0. ]]])]
Runtime test -
In [289]: # Larger inputs
...: A = np.random.randint(1,500,(400))
...: G = np.random.randint(1,400,(20,20))
...:
In [290]: %timeit calcF(G,A)
1 loops, best of 3: 4.46 s per loop
In [291]: %timeit vectorized_calcF(G,A) # Posted solution
1 loops, best of 3: 1.87 s per loop
Vectorization with NumPy/MATLAB : General approach
Felt like I could throw in my two cents on my general approach, and I would think others follow similar strategies when trying to vectorize code, especially on a high-level platform like NumPy or MATLAB. So, here's a quick checklist of things to consider for vectorization -
Idea about extending the dimensions: dimensions are to be extended for the input arrays such that the new dimensions hold the results that would otherwise have been generated iteratively within the nested loops (see the short sketch after this list).
Where to start vectorizing from? Start from the deepest stage of computation (the loop where the code iterates the most) and see how the inputs could be extended and the relevant computation brought in. Take good care to trace the iterators involved and extend dimensions accordingly. Move outwards to the outer loops, until you are satisfied with the vectorization done.
How to take care of conditional statements? For simple cases, brute force compute everything and see how the IF/ELSE parts could be taken care of later on. This would be highly context specific.
Are there dependencies? If so, see if the dependencies could be traced and implemented accordingly. This could form another topic for discussion, but here are a few examples I got myself involved with.
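Here's the short sketch promised above, illustrating the "extend the dimensions" idea on a toy problem (my own example, not tied to the question): computing every pairwise difference A[l] - A[j] with broadcasting instead of two nested loops.
import numpy as np

A = np.array([1., 2., 3., 4.])

# nested-loop version
loop = np.empty((A.size, A.size))
for l in range(A.size):
    for j in range(A.size):
        loop[l, j] = A[l] - A[j]

# broadcast version: an (N,1) array minus an (N,) array gives an (N,N) table
broadcast = A[:, None] - A

assert np.allclose(loop, broadcast)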

Scipy.sparse.csr_matrix: How to get top ten values and indices?

I have a large csr_matrix and I am interested in the top ten values and their indices in each row. But I did not find a decent way to manipulate the matrix.
Here is my current solution and the main idea is to process them row by row:
row = csr_matrix.getrow(row_number).toarray()[0].ravel()
top_ten_indicies = row.argsort()[-10:]
top_ten_values = row[row.argsort()[-10:]]
By doing this, the advantages of csr_matrix are not fully used. It's more like a brute force solution.
I don't see what the advantages of csr format are in this case. Sure, all the nonzero values are collected in one .data array, with the corresponding column indexes in .indices. But they are in blocks of varying length. And that means they can't be processed in parallel or with numpy array strides.
One solution is to pad those blocks into common-length blocks. That's what .toarray() does. Then you can find the maximum values with argsort(axis=1) or with argpartition.
Another is to break them into row-sized blocks, and process each of those. That's what you are doing with .getrow. Another way of breaking them up is to convert to lil format, and process the sublists of the .data and .rows arrays.
A possible third option is to use the ufunc reduceat method. This lets you apply ufunc reduction methods to sequential blocks of an array. There are established ufuncs like np.add that take advantage of this. argsort is not such a function. But there is a way of constructing a ufunc from a Python function, and gaining some modest speed over regular Python iteration. [I need to look up a recent SO question that illustrates this.]
I'll illustrate some of this with a simpler function, sum over rows.
If A2 is a csr matrix.
A2.sum(axis=1) # the fastest compile csr method
A2.A.sum(axis=1) # same, but with a dense intermediary
[np.sum(l.data) for l in A2] # iterate over the rows of A2
[np.sum(A2.getrow(i).data) for i in range(A2.shape[0])] # iterate with index
[np.sum(l) for l in A2.tolil().data] # sum the sublists of lil format
np.add.reduceat(A2.data, A2.indptr[:-1]) # with reduceat
A2.sum(axis=1) is implemented as a matrix multiplication. That's not relevant to the sort problem, but still an interesting way of looking at the summation problem. Remember csr format was developed for efficient multiplication.
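To spell out that remark (a sketch with a small random matrix standing in for A2): summing the rows of a csr matrix is the same as multiplying it by a column of ones.
from scipy import sparse
import numpy as np

A2 = sparse.random(8, 47752, density=0.0001, format='csr', dtype=np.float32)

built_in  = A2.sum(axis=1)                    # the fast compiled method
by_matmul = A2 @ np.ones((A2.shape[1], 1))    # row sums as a matrix product
assert np.allclose(built_in, by_matmul)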
For my current sample matrix (created for another SO sparse question)
<8x47752 sparse matrix of type '<class 'numpy.float32'>'
with 32 stored elements in Compressed Sparse Row format>
some comparative times are
In [694]: timeit np.add.reduceat(A2.data, A2.indptr[:-1])
100000 loops, best of 3: 7.41 µs per loop
In [695]: timeit A2.sum(axis=1)
10000 loops, best of 3: 71.6 µs per loop
In [696]: timeit [np.sum(l) for l in A2.tolil().data]
1000 loops, best of 3: 280 µs per loop
Everything else is 1ms or more.
I suggest focusing on developing your one-row function, something like:
def max_n(row_data, row_indices, n):
    i = row_data.argsort()[-n:]
    # i = row_data.argpartition(-n)[-n:]
    top_values = row_data[i]
    top_indices = row_indices[i]  # do the sparse indices matter?
    return top_values, top_indices, i
Then see how it fits in one of these iteration methods. tolil() looks most promising.
I haven't addressed the question of how to collect these results. Should they be lists of lists, array with 10 columns, another sparse matrix with 10 values per row, etc.?
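Here is a hedged sketch of how max_n might be wired into the tolil iteration (my own wiring, not from the answer above; collecting into a plain Python list is just one of the options mentioned):
from scipy import sparse
import numpy as np

def max_n(row_data, row_indices, n):
    i = row_data.argsort()[-n:]
    top_values = row_data[i]
    top_indices = row_indices[i]
    return top_values, top_indices, i

# a small random csr matrix standing in for the real one
A = sparse.random(8, 47752, density=0.001, format='csr', dtype=np.float32)
Al = A.tolil()

# one (top_values, top_indices) pair per row
results = [max_n(np.asarray(d), np.asarray(r), 10)[:2]
           for d, r in zip(Al.data, Al.rows)]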
sorting each row of a large sparse & saving top K values & column index - Similar question from several years back, but unanswered.
Argmax of each row or column in scipy sparse matrix - Recent question seeking argmax for rows of csr. I discuss some of the same issues.
how to speed up loop in numpy? - example of how to use np.frompyfunc to create a ufunc. I don't know if the resulting function has the .reduceat method.
Increasing value of top k elements in sparse matrix - get the top k elements of csr (not by row). Case for argpartition.
The row summation implemented with np.frompyfunc:
In [741]: def foo(a,b):
     ...:     return a+b
In [742]: vfoo=np.frompyfunc(foo,2,1)
In [743]: timeit vfoo.reduceat(A2.data,A2.indptr[:-1],dtype=object).astype(float)
10000 loops, best of 3: 26.2 µs per loop
That's respectable speed. But I can't think of a way of writing a binary function (one that takes 2 arguments) that would implement argsort via reduction. So this is probably a dead end for this problem.
Just to answer the original question (for people like me who found this question looking for copy-pasta), here's a solution using multiprocessing, based on @hpaulj's suggestion of converting to lil_matrix and iterating over rows:
from multiprocessing import Pool
from functools import partial

def _top_k(args, k):
    """
    Helper function to process a single row of top_k
    """
    data, row = args
    data, row = zip(*sorted(zip(data, row), reverse=True)[:k])
    return data, row

def top_k(m, k):
    """
    Keep only the top k elements of each row in a csr_matrix
    """
    ml = m.tolil()
    with Pool() as p:
        # partial() passes k explicitly instead of relying on a global
        ms = p.map(partial(_top_k, k=k), zip(ml.data, ml.rows))
    ml.data, ml.rows = zip(*ms)
    return ml.tocsr()
One would need to iterate over the rows and get the top indices for each row separately. But this loop can be jitted (and parallelized) with numba to get an extremely fast function.
import numba as nb
import numpy as np

@nb.njit(cache=True)
def row_topk_csr(data, indices, indptr, K):
    m = indptr.shape[0] - 1
    max_indices = np.zeros((m, K), dtype=indices.dtype)
    max_values = np.zeros((m, K), dtype=data.dtype)
    for i in nb.prange(m):
        top_inds = np.argsort(data[indptr[i] : indptr[i + 1]])[::-1][:K]
        max_indices[i] = indices[indptr[i] : indptr[i + 1]][top_inds]
        max_values[i] = data[indptr[i] : indptr[i + 1]][top_inds]
    return max_indices, max_values
Call it like this:
top_pred_indices, _ = row_topk_csr(csr_mat.data, csr_mat.indices, csr_mat.indptr, K)
I need to perform this operation frequently, and this function is fast enough for me; it executes in <1s on a 1mil x 400k sparse matrix.
HTH.

Fastest way of finding the index of the closest element in a non-sorted Python list of floats

Given as input a list of floats that is not sorted, what would be the most efficient way of finding the index of the closest element to a certain value? Some potential solutions come to mind:
For:
import random
x = random.sample([float(i) for i in range(1000000)], 1000000)
1) Own function:
def min_val(lst, val):
    min_i = None
    min_dist = 1000000.0
    for i, v in enumerate(lst):
        d = abs(v - val)
        if d < min_dist:
            min_dist = d
            min_i = i
    return min_i
Result:
%timeit min_val(x, 5000.56)
100 loops, best of 3: 11.5 ms per loop
2) Min
%timeit min(range(len(x)), key=lambda i: abs(x[i]-5000.56))
100 loops, best of 3: 16.8 ms per loop
3) Numpy (including conversion)
%timeit np.abs(np.array(x)-5000.56).argmin()
100 loops, best of 3: 3.88 ms per loop
From that test, it seems that converting the list to numpy array is the best solution. However two questions come to mind:
Was that indeed a realistic comparison?
Is the numpy solution the fastest way to achieve this in Python?
Consider the partition algorithm from QuickSort. The partition algorithm rearranges a list such that the pivot element is in its final location after invocation. Based on the value of the pivot, you could then partition the portion of the array that is likely to contain the element closest to your target. Once you've either found the element you're after or have a partition of length 1 that contains your element, you're done.
The general problem you're addressing is a selection problem.
In your question you were wondering about what sort of array/list implementation to use, and that will have an impact on performance. A bigger factor will be the search algorithm as opposed to the list/array representation.
Edit in light of comment from @Andrzej
Ah, then I misunderstood your question. Strictly speaking, linear search is always O(n), so efficiency within the bounds of big-O analysis is the same regardless of the underlying data structure. The gotcha here is that for linear search you want a nice simple data structure to make the run-time performance as good as possible.
A Python list is an array of references to objects, while (to my understanding) a Numpy array is a contiguous block of values. The Numpy array will perform better since it doesn't have to dereference objects to get to the values.
Your comparison technique seems reasonable for Python list vs. Numpy array. I'd be reluctant to say that a Numpy array is the fastest way to solve the problem, but it should perform better than a Python list.
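One way to probe the "realistic comparison" question (a sketch, my own test rather than part of the answer above): if the data already lives in a numpy array, the conversion cost in option 3 disappears and the numpy approach should pull further ahead.
import random
import timeit
import numpy as np

x = random.sample([float(i) for i in range(1000000)], 1000000)
xa = np.array(x)   # assume the data was already in a numpy array

with_conversion = timeit.timeit(lambda: np.abs(np.array(x) - 5000.56).argmin(), number=10)
pre_converted   = timeit.timeit(lambda: np.abs(xa - 5000.56).argmin(), number=10)
print(with_conversion, pre_converted)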
