Find index of first element with condition in numpy array in Python

I'm trying to find the index of the first element bigger than a threshold, like this:
index = 0
while timeStamps[index] < self.stopCount and index < len(timeStamps):
    index += 1
Can this be done in a one-liner? I found:
index = next((x for x in timeStamps if x <= self.stopCount), 0)
I'm not sure what this expression does and it seems to return 0 always... Could somebody point out the error and explain the expression?

Another option is to use np.argmax (see this post for details). So your code would become something like
(timeStamps > self.stopCount).argmax()
The caveat is that if the condition is never satisfied, argmax will return 0 (because the boolean mask is all False).
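A minimal sketch of that approach with a guard for the never-satisfied case (the data here is made up for illustration):
import numpy as np

timeStamps = np.array([0.2, 0.7, 1.4, 3.1, 7.8])  # made-up data
stopCount = 1.0

mask = timeStamps > stopCount
index = mask.argmax() if mask.any() else len(timeStamps)  # guard the "never satisfied" case
print(index)  # 2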

I would do it this way:
import numpy as np
threshold = 20
sample_array = np.array([10,11,12,13,21,200,1,2])
idx = np.where(sample_array > threshold)[0].min()
print(idx)
#4

This one-liner will work:
threshold = 20
sample_array = np.array([10, 11, 12, 13, 21, 200, 1, 2])
# one-liner
print(sum(np.cumsum(sample_array > threshold) == 0))
np.cumsum(sample_array > threshold) stays at 0 until the first element bigger than the threshold, so summing the == 0 comparison counts those leading elements, which is exactly the index of the first element above the threshold (or the array length if no element ever exceeds it).
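To see why the count equals the index, it may help to print the intermediate arrays (same made-up data as above):
import numpy as np

threshold = 20
sample_array = np.array([10, 11, 12, 13, 21, 200, 1, 2])

mask = sample_array > threshold
print(np.cumsum(mask))            # [0 0 0 0 1 2 2 2]
print(sum(np.cumsum(mask) == 0))  # 4 -- number of leading zeros, i.e. index of the first element above the threshold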

Related

Find and replace specific value in numpy ndarray?

I want to iterate through a numpy ndarray and, if any values are less than X, replace one of them with X.
I have tried doing array_name[ array_name < X] = X but this replaces all of the values that are less than X.
I can use a for loop, but I feel like there's probably a more concise way already bundled with numpy.
for i in array_name:
    if i < X:
        i = X
        break
Is there a way to do this more elegantly?
array_name < X
Returns an array of the same shape, but with True or False values. Then you can just pick an index where the cell is True:
idx = np.argwhere(array_name < X)[i]
array_name[idx] = value
Here, you can choose i arbitrarily
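A minimal sketch of that idea, replacing only the first value below X (the array here is made up for illustration):
import numpy as np

X = 5
array_name = np.array([1, 9, 3, 7, 2])

idx = np.argwhere(array_name < X)[0]  # index of the first value below X
array_name[idx] = X
print(array_name)  # [5 9 3 7 2]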

First element of series to cross threshold in numpy, with handling of series that never cross

I have a numpy array of N time series of length T. I want the index at which each first crosses some threshold, and a -1 or something similar if it never crosses. Take ts_array = np.random.randn(N, T).
np.argmax(ts_array > cutoff, axis=1) gets close, but it returns a 0 for both time series that cross the threshold at time 0, and time series that never cross.
np.where(...) and np.nonzero(...) are possibilities, but their return values would require rather gruesome handling to extract the vector in which I'm interested.
This question is similar to Numpy first occurrence of value greater than existing value but none of the answers there solve it.
One liner:
(ts > c).argmax() if (ts > c).any() else -1
assuming ts = ts_array and c = cutoff
Otherwise:
Use argmax() and any()
np.random.seed([3,1415])

def xover(ts, cut):
    x = ts > cut
    return x.argmax() if x.any() else -1

ts_array = np.random.random(5).round(4)
ts_array looks like:
print ts_array, '\n'
[ 0.4449 0.4076 0.4601 0.4652 0.4627]
Various checks:
print xover(ts_array, 0.400), '\n'
0
print xover(ts_array, 0.460), '\n'
2
print xover(ts_array, 0.465), '\n'
3
print xover(ts_array, 1.000), '\n'
-1
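For the full N-by-T case in the question, the same argmax/any idea can be vectorised across rows; a minimal sketch, assuming ts_array and cutoff as in the question:
import numpy as np

N, T = 4, 10
ts_array = np.random.randn(N, T)
cutoff = 0.5

hits = ts_array > cutoff
first_cross = hits.argmax(axis=1)    # 0 both for "crossed at t=0" and "never crossed"
first_cross[~hits.any(axis=1)] = -1  # disambiguate: mark series that never cross with -1
print(first_cross)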
It's not too bad with np.where. I would use the following as a starting point:
ts_array = np.random.rand(10, 10)
cutoff = 0.5

# Get a list of all indices that satisfy the condition
rows, cols = np.where(ts_array > cutoff)

if len(rows) > 0:
    index = (rows[0], cols[0])
else:
    index = -1
Note that np.where returns two arrays, a list of row indices and a list of column indices. They are matched, so choosing the first one of each array will give us the first instance where the values are above the cutoff. I don't have a nice one-liner, but the handling code isn't too bad. It should be easily adaptable to your situation.

Return elements in a location corresponding to the minimum values of another array

I have two arrays with the same shape in the first two dimensions and I'm looking to record the minimum value in each row of the first array. However I would also like to record the elements in the corresponding position in the third dimension of the second array. I can do it like this:
A = np.random.random((5000, 100))
B = np.random.random((5000, 100, 3))
A_mins = np.ndarray((5000, 4))

for i, row in enumerate(A):
    current_min = min(row)
    A_mins[i, 0] = current_min
    A_mins[i, 1:] = B[i, row == current_min]
I'm new to programming (so correct me if I'm wrong) but I understand that with Numpy doing calculations on whole arrays is faster than iterating over them. With this in mind is there a faster way of doing this? I can't see a way to get rid of the row == current_min bit even though the location of the minimum point must have been 'known' to the computer when it was calculating the min().
Any tips/suggestions appreciated! Thanks.
Something along the lines of what @lib talked about:
index = np.argmin(A, axis=1)
A_mins[:,0] = A[np.arange(len(A)), index]
A_mins[:,1:] = B[np.arange(len(A)), index]
It is much faster than using a for loop.
For getting the index of the minimum value, use np.argmin instead of min plus a comparison.
The amin function (and many other functions in numpy) also takes an axis argument, which you can use to get the minimum of each row or each column.
See http://docs.scipy.org/doc/numpy/reference/generated/numpy.amin.html
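A minimal sketch of the axis argument on a small made-up array:
import numpy as np

A = np.random.random((5, 4))

row_min_values = np.amin(A, axis=1)     # minimum value of each row, shape (5,)
row_min_columns = np.argmin(A, axis=1)  # column index of each row's minimum, shape (5,)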

How can I find where a vector value passes a threshold and the n+1 value meets a condition?

Here's what I have so far:
locs = np.where((v_1 >= 1.9) & (v_1 <= 2.00))
I just want to add the condition that the next element (after the one where v_1 meets the current conditions) is larger than the found element.
Thanks in advance!
Well, I solved it myself!
(see Finding local maxima/minima with Numpy in a 1D numpy array )
locs = (np.diff(np.sign(np.diff(v_1))) < 0).nonzero()[0] +1
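For the condition as originally stated (the next element must be larger than the element that passes the threshold test), a shifted comparison is one possible sketch, assuming v_1 is a 1-D numpy array (the data is made up):
import numpy as np

v_1 = np.array([1.95, 2.10, 1.92, 1.80, 1.99, 2.50])

# elements in [1.9, 2.0] whose immediate successor is larger
locs = np.where((v_1[:-1] >= 1.9) & (v_1[:-1] <= 2.0) & (v_1[1:] > v_1[:-1]))[0]
print(locs)  # [0 4]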

Find large number of consecutive values fulfilling condition in a numpy array

I have some audio data loaded in a numpy array and I wish to segment the data by finding silent parts, i.e. parts where the audio amplitude is below a certain threshold over a period in time.
An extremely simple way to do this is something like this:
values = ''.join(("1" if (abs(x) < SILENCE_THRESHOLD) else "0" for x in samples))
pattern = re.compile('1{%d,}'%int(MIN_SILENCE))
for match in pattern.finditer(values):
    # code goes here
The code above finds parts where there are at least MIN_SILENCE consecutive elements smaller than SILENCE_THRESHOLD.
Now, obviously, the above code is horribly inefficient and a terrible abuse of regular expressions. Is there some other method that is more efficient, but still results in equally simple and short code?
Here's a numpy-based solution.
I think (?) it should be faster than the other options. Hopefully it's fairly clear.
However, it does require twice as much memory as the various generator-based solutions. As long as you can hold a single temporary copy of your data in memory (for the diff), plus a boolean array of the same length as your data (one byte per element in numpy), it should be pretty efficient...
import numpy as np

def main():
    # Generate some random data
    x = np.cumsum(np.random.random(1000) - 0.5)
    condition = np.abs(x) < 1

    # Print the start and stop indices of each region where the absolute
    # values of x are below 1, and the min and max of each of these regions
    for start, stop in contiguous_regions(condition):
        segment = x[start:stop]
        print start, stop
        print segment.min(), segment.max()

def contiguous_regions(condition):
    """Finds contiguous True regions of the boolean array "condition". Returns
    a 2D array where the first column is the start index of the region and the
    second column is the end index."""

    # Find the indices of changes in "condition"
    d = np.diff(condition)
    idx, = d.nonzero()

    # We need to start things after the change in "condition". Therefore,
    # we'll shift the index by 1 to the right.
    idx += 1

    if condition[0]:
        # If the start of condition is True prepend a 0
        idx = np.r_[0, idx]

    if condition[-1]:
        # If the end of condition is True, append the length of the array
        idx = np.r_[idx, condition.size]

    # Reshape the result into two columns
    idx.shape = (-1, 2)
    return idx

main()
There is a very convenient solution to this using scipy.ndimage. For an array:
a = np.array([1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0])
which can be the result of a condition applied to another array, finding the contiguous regions is as simple as:
regions = scipy.ndimage.find_objects(scipy.ndimage.label(a)[0])
Then, applying any function to those regions can be done e.g. like:
[np.sum(a[r]) for r in regions]
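A self-contained sketch of that approach, with the imports it needs (output values noted in comments):
import numpy as np
import scipy.ndimage

a = np.array([1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0])

labels, n = scipy.ndimage.label(a)             # label each contiguous run of nonzero values
regions = scipy.ndimage.find_objects(labels)   # one tuple of slices per labelled region
print(regions)                                 # two regions: slices 0:4 and 7:10
print([np.sum(a[r]) for r in regions])         # sums of each region: 4 and 3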
Slightly sloppy, but simple and fast-ish, if you don't mind using scipy:
from scipy.ndimage import gaussian_filter
sigma = 3
threshold = 1
above_threshold = gaussian_filter(data, sigma=sigma) > threshold
The idea is that quiet portions of the data will smooth down to low amplitude, and loud regions won't. Tune 'sigma' to affect how long a 'quiet' region must be; tune 'threshold' to affect how quiet it must be. This slows down for large sigma, at which point using FFT-based smoothing might be faster.
This has the added benefit that single 'hot pixels' won't disrupt your silence-finding, so you're a little less sensitive to certain types of noise.
I haven't tested this, but it should be close to what you are looking for. It is slightly more lines of code, but it should be more efficient, more readable, and it doesn't abuse regular expressions :-)
def find_silent(samples):
    num_silent = 0
    start = 0
    for index in range(0, len(samples)):
        if abs(samples[index]) < SILENCE_THRESHOLD:
            if num_silent == 0:
                start = index
            num_silent += 1
        else:
            if num_silent > MIN_SILENCE:
                yield samples[start:index]
            num_silent = 0
    if num_silent > MIN_SILENCE:
        yield samples[start:]

for match in find_silent(samples):
    # code goes here
This should return a list of (start,length) pairs:
def silent_segs(samples, threshold, min_dur):
    start = -1
    silent_segments = []
    for idx, x in enumerate(samples):
        if start < 0 and abs(x) < threshold:
            start = idx
        elif start >= 0 and abs(x) >= threshold:
            dur = idx - start
            if dur >= min_dur:
                silent_segments.append((start, dur))
            start = -1
    return silent_segments
And a simple test:
>>> s = [-1,0,0,0,-1,10,-10,1,2,1,0,0,0,-1,-10]
>>> silent_segs(s,2,2)
[(0, 5), (9, 5)]
Another way to do this quickly and concisely:
import pylab as pl
v=[0,0,1,1,0,0,1,1,1,1,1,0,1,0,1,1,0,0,0,0,0,1,0,0]
vd = pl.diff(v)
#vd[i]==1 for 0->1 crossing; vd[i]==-1 for 1->0 crossing
#need to add +1 to indexes as pl.diff shifts to left by 1
i1=pl.array([i for i in xrange(len(vd)) if vd[i]==1])+1
i2=pl.array([i for i in xrange(len(vd)) if vd[i]==-1])+1
#corner cases for the first and the last element
if v[0]==1:
    i1=pl.hstack((0,i1))
if v[-1]==1:
    i2=pl.hstack((i2,len(v)))
Now i1 contains the beginning indexes and i2 the end indexes of the 1,...,1 areas.
@joe-kington I got about a 20%-25% speed improvement over the np.diff / np.nonzero solution by using argmax instead (see the code below; condition is boolean):
def contiguous_regions(condition):
    idx = []
    i = 0
    while i < len(condition):
        x1 = i + condition[i:].argmax()
        try:
            x2 = x1 + condition[x1:].argmin()
        except:
            x2 = x1 + 1
        if x1 == x2:
            if condition[x1] == True:
                x2 = len(condition)
            else:
                break
        idx.append([x1, x2])
        i = x2
    return idx
Of course, your mileage may vary depending on your data.
Besides, I'm not entirely sure, but I guess numpy may optimize argmin/argmax over boolean arrays to stop searching at the first True/False occurrence. That might explain it.
I know I'm late to the party, but another way to do this is with 1d convolutions:
np.convolve(sig > threshold, np.ones((cons_samples)), 'same') == cons_samples
Here, cons_samples is the number of consecutive samples you require above the threshold.
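A small made-up example of that idea, requiring three consecutive samples above the threshold:
import numpy as np

sig = np.array([0.1, 0.9, 0.8, 0.7, 0.2, 0.9, 0.1])
threshold = 0.5
cons_samples = 3  # number of consecutive samples required above the threshold

mask = np.convolve(sig > threshold, np.ones(cons_samples), 'same') == cons_samples
print(mask)  # True only at index 2, the centre of the 3-sample run above the threshold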
