I have thousands of time series (24 dimensional data -- 1 dimension for each hour of the day). Out of these time series, I'm interested in a particular sub-sequence or pattern that looks like this:
I'm interested in sub-sequences that resemble the overall shape of the highlighted section -- that is, a sub-sequence with a sharp negative slope, followed by a period of several hours where the slope is relatively flat before finally ending with a sharp positive slope. I know the sub-sequences I'm interested in won't match each other exactly and most likely will be shifted in time, scaled differently, have longer/shorter periods where the slope is relatively flat, etc. but I would like to find a way to detect them all.
To do this, I have developed a simple heuristic (based on my definition of the highlighted section) to quickly find some of the sub-sequences of interest. However, I was wondering if there is a more elegant way (in Python) to search thousands of time series for the sub-sequences I'm interested in, while taking into account the things mentioned above -- differences in time, scale, etc.?
Edit: a year later, I cannot believe how much I overcomplicated flatline and slope detection; stumbling on the same question, I realized it's as simple as:
idxs = np.where(x[1:] - x[:-1] == 0)[0]
idxs = [i for idx in idxs for i in (idx, idx + 1)]
The first line is implemented efficiently via np.diff(x); note the [0], since np.where returns a tuple. Further, to e.g. detect slope > 5, use np.diff(x) > 5. The second line is needed because differencing tosses out right endpoints (e.g. diff([5,6,6,6,7]) = [1,0,0,1] -> idxs=[1,2], which excludes index 3).
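A quick sanity check of that recipe (illustrative snippet; the set dedupes interior points that appear as both a right and a left endpoint of adjacent flat steps):
import numpy as np

x = np.array([5, 6, 6, 6, 7])
step_idxs = np.where(np.diff(x) == 0)[0]                              # [1, 2]
flat_idxs = sorted({i for idx in step_idxs for i in (idx, idx + 1)})  # [1, 2, 3]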
The functions below should do the job; the code is written with intuitive variable and method names, and should be self-explanatory with a few read-overs. It is efficient and scalable.
Functionalities:
Specify min & max flatline length
Specify min & max slopes for left & right tails
Specify min & max average slopes for left & right tails, over multiple intervals
Example:
import numpy as np
import matplotlib.pyplot as plt
# Toy data
t = np.array([[ 5,  3,  3,  5,  3,  3,  3,  3,  3,  5,  5,  3,  3,  0,  4,
                1,  1, -1, -1,  1,  1,  1,  1, -1,  1,  1, -1,  0,  3,  3,
                5,  5,  3,  3,  3,  3,  3,  5,  7,  3,  3,  5]]).T
plt.plot(t)
plt.show()
# Get flatline indices
indices = get_flatline_indices(t, min_len=4, max_len=5)
plt.plot(t)
for idx in indices:
    plt.plot(idx, t[idx], marker='o', color='r')
plt.show()
# Filter by edge slopes
lims_left = (-10, -2)
lims_right = (2, 10)
averaging_intervals = [1, 2, 3]
indices_filtered = filter_by_tail_slopes(indices, t, lims_left, lims_right,
                                         averaging_intervals)
plt.plot(t)
for idx in indices_filtered:
    plt.plot(idx, t[idx], marker='o', color='r')
plt.show()
def get_flatline_indices(sequence, min_len=2, max_len=6):
    indices = []
    elem_idx = 0
    max_elem_idx = len(sequence) - min_len
    while elem_idx < max_elem_idx:
        current_elem = sequence[elem_idx]
        next_elem = sequence[elem_idx + 1]
        flatline_len = 0
        if current_elem == next_elem:
            while current_elem == next_elem:
                flatline_len += 1
                if elem_idx + flatline_len >= len(sequence):
                    break  # guard against running past the end of the sequence
                next_elem = sequence[elem_idx + flatline_len]
            if flatline_len >= min_len:
                if flatline_len > max_len:
                    flatline_len = max_len
                trim_start = elem_idx
                trim_end = trim_start + flatline_len
                indices_to_append = [index for index in range(trim_start, trim_end)]
                indices += indices_to_append
            elem_idx += flatline_len
            flatline_len = 0
        else:
            elem_idx += 1
    return indices  # entries are ints, so no empty-list filtering is needed
def filter_by_tail_slopes(indices, data, lims_left, lims_right, averaging_intervals=1):
    indices_filtered = []
    indices_temp, tails_temp = [], []
    got_left, got_right = False, False
    for idx in indices:
        slopes_left, slopes_right = _get_slopes(data, idx, averaging_intervals)
        for tail_left, slope_left in enumerate(slopes_left):
            if _valid_slope(slope_left, lims_left):
                if got_left:
                    indices_temp = []  # discard prev if twice in a row
                    tails_temp = []
                indices_temp.append(idx)
                tails_temp.append(tail_left + 1)
                got_left = True
        if got_left:
            for edge_right, slope_right in enumerate(slopes_right):
                if _valid_slope(slope_right, lims_right):
                    if got_right:
                        indices_temp.pop(-1)
                        tails_temp.pop(-1)
                    indices_temp.append(idx)
                    tails_temp.append(edge_right + 1)
                    got_right = True
        if got_left and got_right:
            left_append = indices_temp[0] - tails_temp[0]
            right_append = indices_temp[1] + tails_temp[1]
            indices_filtered.append(_fill_range(left_append, right_append))
            indices_temp = []
            tails_temp = []
            got_left, got_right = False, False
    return indices_filtered
def _get_slopes(data, idx, averaging_intervals):
    if isinstance(averaging_intervals, int):
        averaging_intervals = [averaging_intervals]
    slopes_left, slopes_right = [], []
    for interval in averaging_intervals:
        slopes_left += [(data[idx] - data[idx - interval]) / interval]
        slopes_right += [(data[idx + interval] - data[idx]) / interval]
    return slopes_left, slopes_right

def _valid_slope(slope, lims):
    min_slope, max_slope = lims
    return (slope >= min_slope) and (slope <= max_slope)

def _fill_range(_min, _max):
    return [i for i in range(_min, _max + 1)]
I am looking for a way to speed up a specific operation on tensors in PyTorch. Since it is a general operation on matrices, I am open to answers in NumPy as well.
Let's say I have a tensor with values from 0 to N-1 (N=4) where each value repeats the same number of times (R=2).
import torch
x = torch.Tensor([0, 0, 1, 1, 2, 2, 3, 3])
In this case, it is sorted, but any permutation of x is also in the set of considered tensors X.
I am getting an input tensor with values from 0 to N-1 but without any constraints on the repetition.
z = torch.tensor([3, 2, 3, 0, 2, 3, 1, 2])
And I would like to find an efficient implementation of foo such that y = foo(z). y should be some permutation of x (from the set X) that tries to do as few changes in z as possible (in terms of Hamming distance), for example
y = torch.tensor([3, 2, 3, 0, 2, 0, 1, 1])
The trivial solution is to keep counting the number of elements with the same value, but it is extremely inefficient to process elements one by one for larger tensors:
def foo(z):
    R = 2
    N = 4
    counters = [0] * N
    # first, we replace extra elements with -1
    y = []
    for elem in z:
        if counters[elem] < R:
            counters[elem] += 1
            y.append(elem)
        else:
            y.append(-1)
    y = torch.tensor(y)
    assert torch.equal(y, torch.tensor([3, 2, 3, 0, 2, -1, 1, -1]))
    # second, we replace -1 by "unfilled" counters
    for i in range(len(y)):
        if y[i] == -1:
            first_unfilled = [n for n in range(N) if counters[n] < R][0]
            counters[first_unfilled] += 1
            y[i] = first_unfilled
    return y
assert torch.equal(y, foo(z))
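Not part of the original question, but for reference, a hedged sketch of a vectorized approach (foo_vectorized is an illustrative name; the occurrence-rank trick builds an (L, L) comparison matrix, so it trades memory for speed and assumes moderate lengths):
import torch

def foo_vectorized(z, N=4, R=2):
    # occ[i] = how many times z[i] has occurred in z[:i+1], minus 1
    occ = (z.unsqueeze(0) == z.unsqueeze(1)).long().tril().sum(dim=1) - 1
    y = z.clone()
    extra = occ >= R                                 # surplus copies to replace
    counts = torch.bincount(y[~extra], minlength=N)  # copies kept per value
    deficit = R - counts                             # copies each value still needs
    fill = torch.repeat_interleave(torch.arange(N), deficit)
    y[extra] = fill                                  # smallest unfilled values go to earliest slots
    return y

z = torch.tensor([3, 2, 3, 0, 2, 3, 1, 2])
assert torch.equal(foo_vectorized(z), torch.tensor([3, 2, 3, 0, 2, 0, 1, 1]))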
I want to convert [0, 0, 1, 0, 1, 0, 1, 0] to [2, 4, 6] using ortools.
Here "2", "4", "6" in the second list are the indexes of the "1"s in the first list.
Using the below code I could get a list [0, 0, 2, 0, 4, 0, 6, 0]. How can I get [2, 4, 6]?
from ortools.sat.python import cp_model
model = cp_model.CpModel()
solver = cp_model.CpSolver()
work = {}
days = 8
horizon = 7
for i in range(days):
    work[i] = model.NewBoolVar("work(%i)" % (i))
model.Add(work[0] == 0)
model.Add(work[1] == 0)
model.Add(work[2] == 1)
model.Add(work[3] == 0)
model.Add(work[4] == 1)
model.Add(work[5] == 0)
model.Add(work[6] == 1)
model.Add(work[7] == 0)
v1 = [model.NewIntVar(0, horizon, "") for _ in range(days)]
for d in range(days):
    model.Add(v1[d] == d * work[d])
status = solver.Solve(model)
print("status:", status)
vec = []
for i in range(days):
    vec.append(solver.Value(work[i]))
print("work",vec)
vec = []
for v in v1:
    vec.append(solver.Value(v))
print("vec1",vec)
You should see this output on the console:
status: 4
work [0, 0, 1, 0, 1, 0, 1, 0]
vec1 [0, 0, 2, 0, 4, 0, 6, 0]
Thank you.
Edit:
I also wish to get a result as [4, 6, 2].
For just 3 variables, this is easy. In pseudo code:
The max index is max(work[i] * i)
The min index is min(horizon - (horizon - i) * work[i])
The middle index is sum(i * work[i]) - max_index - min_index
But that is cheating.
If you want more than 3 variables, you will need parallel arrays of Boolean variables that indicate the rank of each variable.
Let me sketch the full solution.
You need to build a graph. The x axis holds the variables; the y axis holds the ranks. You have horizontal arcs going right, and diagonal arcs going right and up. If a variable is selected, you must use a diagonal arc; otherwise, a horizontal arc.
If using a diagonal arc, you will assign the current variable to the rank of the tail of the arc.
Then you need to add constraints to make it a contiguous path:
mass conservation at each node
variable is selected -> one of its diagonal arcs must be used
variable is not selected -> one of its horizontal arcs must be used
bottom left node has one outgoing arc
top right node has one incoming arc
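To make the rank-indicator idea concrete, here is a minimal sketch (assuming the number of selected days k is known up front; variable names are illustrative):
from ortools.sat.python import cp_model

model = cp_model.CpModel()
days, k = 8, 3
pattern = [0, 0, 1, 0, 1, 0, 1, 0]
work = [model.NewBoolVar("work(%i)" % i) for i in range(days)]
for i, v in enumerate(pattern):
    model.Add(work[i] == v)

# rank[i][r] is true iff work[i] is the r-th selected variable
rank = [[model.NewBoolVar("rank(%i,%i)" % (i, r)) for r in range(k)]
        for i in range(days)]
result = [model.NewIntVar(0, days - 1, "result(%i)" % r) for r in range(k)]
for i in range(days):
    model.Add(sum(rank[i]) == work[i])  # selected variables get exactly one rank
for r in range(k):
    model.Add(sum(rank[i][r] for i in range(days)) == 1)  # each rank used once
    model.Add(result[r] == sum(i * rank[i][r] for i in range(days)))
for r in range(k - 1):
    model.Add(result[r] < result[r + 1])  # drop this line to allow any order

solver = cp_model.CpSolver()
solver.Solve(model)
print([solver.Value(v) for v in result])  # [2, 4, 6]
Dropping the ordering constraint makes any permutation feasible, such as the [4, 6, 2] mentioned in the question's edit.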
I am working with a spectrum in Python and I have fit a line to that spectrum. I want code that can detect if there have been, let's say, 10 data points on the spectrum in a row that are less than the fitted line. Does anyone know a simple and quick way to do this?
I currently have something like this:
count = 0
for i in range(lowerbound, upperbound):
    if spectrum[i] < fittedline[i]:
        count += 1
    if count > 15:
        *do whatever*
If I changed the first if statement line to be:
if spectrum[i] < fittedline[i] and spectrum[i+1] < fittedline[i+1] and so on
I'm sure the algorithm would work, but is there a smarter way for me to automate this in the case where I want the user to input a number for how many data points in a row must be less than the fitted line?
Your attempt is pretty close to working! For consecutive points, all you need to do is reset the count if one point doesn't satisfy your condition.
num_points = int(input("How many points must be less than the fitted line? "))
count = 0
for i in range(lowerbound, upperbound):
    if spectrum[i] < fittedline[i]:
        count += 1
    else:  # If the current point is NOT below the threshold, reset the count
        count = 0
    if count >= num_points:
        print(f"{count} consecutive points found at location {i-count+1}-{i}!")
Let's test this:
lowerbound = 0
upperbound = 10
num_points = 5
spectrum = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
fittedline = [1, 2, 10, 10, 10, 10, 10, 8, 9, 10]
Running the code with these values gives:
5 consecutive points found at location 2-6!
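If you later want this vectorized, here is a hedged sketch (not from either answer; find_runs_below and the False-padding trick are illustrative) that locates every run of at least num_points consecutive points below the line:
import numpy as np

def find_runs_below(spectrum, fittedline, num_points):
    below = np.asarray(spectrum) < np.asarray(fittedline)
    # pad with False so every run has a detectable start and end
    edges = np.diff(np.concatenate(([False], below, [False])).astype(int))
    starts = np.flatnonzero(edges == 1)   # where a run begins
    lengths = np.flatnonzero(edges == -1) - starts
    return starts[lengths >= num_points]

print(find_runs_below(spectrum, fittedline, 5))  # [2]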
My recommendation would be to research and use existing libraries before developing ad-hoc functionality.
In this case, some super smart people developed the numerical Python library numpy. This library, widely used in science projects, has a ton of useful functionality implemented off the shelf, tested and optimized.
Your count can be obtained with the following one-liner (note that it counts all points below the line, not just consecutive runs):
number_of_points = (np.array(spectrum) < np.array(fittedline)).sum()
But let's go step by step:
spectrum = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
fittedline = [1, 2, 10, 10, 10, 10, 10, 8, 9, 10]
# Import numerical python module
import numpy as np
# Convert your lists to numpy arrays
spectrum_array = np.array(spectrum)
fittedline_array = np.array(fittedline)
# Subtract the fitted line from the spectrum
difference = spectrum_array - fittedline_array
#>>> array([ 0, 0, -7, -6, -5, -4, -3, 0, 0, 0])
# Identify points where condition is met
condition_check_array = difference < 0.0
# >>> array([False, False, True, True, True, True, True, False, False, False])
# Get the number of points where condition is met
number_of_points = condition_check_array.sum()
# >>> 5
# Get index of points where condition is met
index_of_points = np.where(difference < 0)
# >>> (array([2, 3, 4, 5, 6], dtype=int64),)
print(f"{number_of_points} points found at location {index_of_points[0][0]}-{index_of_points[0][-1]}!")
# Now same functionality in a simple function
def get_point_count(spectrum, fittedline):
    return (np.array(spectrum) < np.array(fittedline)).sum()
get_point_count(spectrum, fittedline)
Now let's consider that instead of having 10 points in your spectrum, you have millions. Code efficiency is a key thing to consider, and numpy can help there:
import time

number_of_samples = 1000000
spectrum = [1] * number_of_samples
# >>> [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...]
fittedline = [0] * number_of_samples
fittedline[2:7] = [2] * 5
# >>> [0, 0, 2, 2, 2, 2, 2, 0, 0, 0, ...]
# With numpy
start_time = time.time()
number_of_points = (np.array(spectrum) < np.array(fittedline)).sum()
numpy_time = time.time() - start_time
print("--- %s seconds ---" % (numpy_time))
# With ad hoc loop and ifs
start_time = time.time()
count = 0
for i in range(0, len(spectrum)):
    if spectrum[i] < fittedline[i]:
        count += 1
    else:  # If the current point is NOT below the threshold, reset the count
        count = 0
adhoc_time = time.time() - start_time
print("--- %s seconds ---" % (adhoc_time))
print("Ad hoc is {:3.1f}% slower".format(100 * (adhoc_time / numpy_time - 1)))
>>>--- 0.20999646186828613 seconds ---
>>>--- 0.28800177574157715 seconds ---
>>>Ad hoc is 37.1% slower
I have two sorted numpy arrays similar to these:
x = np.array([1, 2, 8, 11, 15])
y = np.array([1, 8, 15, 17, 20, 21])
Elements never repeat in the same array. I want to find a Pythonic way of building a list of index pairs giving the locations in each array at which the same element exists.
For instance, 1 exists in x and y at index 0. Element 2 in x doesn't exist in y, so I don't care about that item. However, 8 does exist in both arrays - in index 2 in x but index 1 in y. Similarly, 15 exists in both, in index 4 in x, but index 2 in y. So the outcome of my function would be a list that in this case returns [[0, 0], [2, 1], [4, 2]].
So far what I'm doing is:
def get_indexes(x, y):
    indexes = []
    for i in range(len(x)):
        # Find index where item x[i] is in y:
        j = np.where(x[i] == y)[0]
        # If it exists, save it:
        if len(j) != 0:
            indexes.append([i, j[0]])
    return indexes
But the problem is that the arrays x and y are very large (millions of items), so it takes quite a while. Is there a better, more Pythonic way of doing this?
Without Python loops
Code
def get_indexes_darrylg(x, y):
    ' darrylg answer '
    # Use intersect1d to find common elements between the two arrays
    overlap = np.intersect1d(x, y)
    # Indexes of common elements in each array
    loc1 = np.searchsorted(x, overlap)
    loc2 = np.searchsorted(y, overlap)
    # Zip the two 1d arrays into a 2d array
    return np.dstack((loc1, loc2))[0]
Usage
x = np.array([1, 2, 8, 11, 15])
y = np.array([1, 8, 15, 17, 20, 21])
result = get_indexes_darrylg(x, y)
# result: array([[0, 0],
#                [2, 1],
#                [4, 2]], dtype=int64)
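As a side note, on NumPy >= 1.15 np.intersect1d can return these indices directly, so the searchsorted step can be skipped:
overlap, loc1, loc2 = np.intersect1d(x, y, return_indices=True)
result = np.column_stack((loc1, loc2))
# array([[0, 0],
#        [2, 1],
#        [4, 2]])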
Timing Posted Solutions
Results show that the darrylg code has the fastest run time.
Code Adjustment
Each posted solution is wrapped in a function.
Slight mod so that each solution outputs a numpy array.
Curves are named after their posters.
Code
import numpy as np
import perfplot
def create_arr(n):
    ' Creates a pair of 1d numpy arrays with half the elements equal '
    max_val = 100000  # One more than largest value in output arrays
    arr1 = np.random.randint(0, max_val, (n,))
    arr2 = arr1.copy()
    # Change half the elements in arr2
    all_indexes = np.arange(0, n, dtype=int)
    indexes = np.random.choice(all_indexes, size=n//2, replace=False)  # locations to make changes
    np.put(arr2, indexes, np.random.randint(0, max_val, (n//2,)))  # assign new random values at change locations
    arr1 = np.sort(arr1)
    arr2 = np.sort(arr2)
    return (arr1, arr2)
def get_indexes_lllrnr101(x, y):
    ' lllrnr101 answer '
    ans = []
    i = 0
    j = 0
    while (i < len(x) and j < len(y)):
        if x[i] == y[j]:
            ans.append([i, j])
            i += 1
            j += 1
        elif (x[i] < y[j]):
            i += 1
        else:
            j += 1
    return np.array(ans)
def get_indexes_joostblack(x, y):
    ' joostblack answer '
    indexes = []
    for idx, val in enumerate(x):
        idy = np.searchsorted(y, val)
        try:
            if y[idy] == val:
                indexes.append([idx, idy])
        except IndexError:
            continue  # ignore index errors
    return np.array(indexes)
def get_indexes_mustafa(x, y):
    ' mustafa answer '
    indices_in_x = np.flatnonzero(np.isin(x, y))                # array([0, 2, 4])
    indices_in_y = np.flatnonzero(np.isin(y, x[indices_in_x]))  # array([0, 1, 2])
    return np.array(list(zip(indices_in_x, indices_in_y)))
def get_indexes_darrylg(x, y):
    ' darrylg answer '
    # Use intersect1d to find common elements between the two arrays
    overlap = np.intersect1d(x, y)
    # Indexes of common elements in each array
    loc1 = np.searchsorted(x, overlap)
    loc2 = np.searchsorted(y, overlap)
    # Zip the two 1d arrays into a 2d array
    return np.dstack((loc1, loc2))[0]
def get_indexes_akopcz(x, y):
    ' akopcz answer '
    return np.array([
        [i, j]
        for i, nr in enumerate(x)
        for j in np.where(nr == y)[0]
    ])
perfplot.show(
    setup=create_arr,  # tuple of two 1D random arrays
    kernels=[
        lambda a: get_indexes_lllrnr101(*a),
        lambda a: get_indexes_joostblack(*a),
        lambda a: get_indexes_mustafa(*a),
        lambda a: get_indexes_darrylg(*a),
        lambda a: get_indexes_akopcz(*a),
    ],
    labels=["lllrnr101", "joostblack", "mustafa", "darrylg", "akopcz"],
    n_range=[2 ** k for k in range(5, 21)],
    xlabel="Array Length",
    # More optional arguments with their default values:
    # logx="auto",  # set to True or False to force scaling
    # logy="auto",
    equality_check=None,  # np.allclose; set to None to disable "correctness" assertion
    # show_progress=True,
    # target_time_per_measurement=1.0,
    # time_unit="s",  # one of ("auto", "s", "ms", "us", "ns") to force plot units
    # relative_to=1,  # plot the timings relative to one of the measurements
    # flops=lambda n: 3*n,  # FLOPS plots
)
What you are doing is O(n log n), which is decent enough.
If you want, you can do it in O(n) by iterating over both arrays with two pointers; since they are sorted, advance the pointer of the array with the smaller element.
See below:
x = [1, 2, 8, 11, 15]
y = [1, 8, 15, 17, 20, 21]
def get_indexes(x, y):
    ans = []
    i = 0
    j = 0
    while (i < len(x) and j < len(y)):
        if x[i] == y[j]:
            ans.append([i, j])
            i += 1
            j += 1
        elif (x[i] < y[j]):
            i += 1
        else:
            j += 1
    return ans
print(get_indexes(x, y))
which gives me:
[[0, 0], [2, 1], [4, 2]]
Although this function will search for all the occurrences of x[i] in the y array, if duplicates are not allowed in y it will find x[i] exactly once.
def get_indexes(x, y):
    return [
        [i, j]
        for i, nr in enumerate(x)
        for j in np.where(nr == y)[0]
    ]
You can use numpy.searchsorted:
def get_indexes(x, y):
    indexes = []
    for idx, val in enumerate(x):
        idy = np.searchsorted(y, val)
        if idy < len(y) and y[idy] == val:  # bounds check avoids an IndexError when val > y[-1]
            indexes.append([idx, idy])
    return indexes
One solution is to first look from x's side to see which values are included in y by getting their indices through np.isin and np.flatnonzero, and then use the same procedure from the other side; but instead of passing x entirely, we pass only the (already found) intersected elements, to save time:
indices_in_x = np.flatnonzero(np.isin(x, y)) # array([0, 2, 4])
indices_in_y = np.flatnonzero(np.isin(y, x[indices_in_x])) # array([0, 1, 2])
Now you can zip them to get the result:
result = list(zip(indices_in_x, indices_in_y)) # [(0, 0), (2, 1), (4, 2)]
Let's say I have a 2D array of (N, N) shape:
import numpy as np
my_array = np.random.random((N, N))
Now I want to do some computations only on some "cells" of this array, for instance the ones inside the central part of the array. To avoid doing computations on cells I'm not interested in, what I usually do here is create a Boolean mask, in this spirit:
my_mask = np.zeros_like(my_array, bool)
my_mask[40:61,40:61] = True
my_array[my_mask] = some_twisted_computations(my_array[my_mask])
But what if some_twisted_computations() involves values of the neighboring cells when they are inside the mask? Performance-wise, would it be a good idea to create an "adjacency array" with a (len(my_mask), 4) shape, storing the indices of the 4-connected neighbor cells in the flat my_array[my_mask] array that I will use in some_twisted_computations()? If yes, what are the efficient options for computing such an adjacency array? Should I switch to a lower-level language/other data structures?
My real-world array shapes are around (1000, 1000, 1000); the mask concerns only a small subset (~100000) of these values and has rather complex geometry. I hope my questions make sense...
EDIT: the very dirty and slow solution I've worked out:
wall = mask
i = 0
top_neighbors = []
down_neighbors = []
left_neighbors = []
right_neighbors = []
indices = []
for index, val in np.ndenumerate(wall):
    if not val:
        continue
    indices += [index]
    if wall[index[0] + 1, index[1]]:
        down_neighbors += [(index[0] + 1, index[1])]
    else:
        down_neighbors += [i]
    if wall[index[0] - 1, index[1]]:
        top_neighbors += [(index[0] - 1, index[1])]
    else:
        top_neighbors += [i]
    if wall[index[0], index[1] - 1]:
        left_neighbors += [(index[0], index[1] - 1)]
    else:
        left_neighbors += [i]
    if wall[index[0], index[1] + 1]:
        right_neighbors += [(index[0], index[1] + 1)]
    else:
        right_neighbors += [i]
    i += 1
top_neighbors = [i if type(i) is int else indices.index(i) for i in top_neighbors]
down_neighbors = [i if type(i) is int else indices.index(i) for i in down_neighbors]
left_neighbors = [i if type(i) is int else indices.index(i) for i in left_neighbors]
right_neighbors = [i if type(i) is int else indices.index(i) for i in right_neighbors]
The best answer will probably depend on the nature of the computations you want to do. For example, if they can be expressed as summations over neighboring pixels, then something like np.convolve or scipy.signal.fftconvolve can be a really nice solution.
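For instance, a minimal sketch assuming the computation is a plain sum over 4-connected neighbors (the kernel choice is illustrative):
import numpy as np
from scipy.signal import fftconvolve

x = np.random.rand(100, 100)
kernel = np.array([[0, 1, 0],
                   [1, 0, 1],
                   [0, 1, 0]])  # 4-connected neighbors, center excluded
neighbor_sums = fftconvolve(x, kernel, mode='same')  # same shape as x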
For your specific question of efficiently generating arrays of neighbor indices, you might try something like this:
x = np.random.rand(100, 100)
mask = x > 0.9
i, j = np.where(mask)
i_neighbors = i[:, np.newaxis] + [0, 0, -1, 1]
j_neighbors = j[:, np.newaxis] + [-1, 1, 0, 0]
# need to do something with the edge cases
# the best choice will depend on your application
# here we'll change out-of-bounds neighbors to the
# central point itself.
i_neighbors = np.clip(i_neighbors, 0, 99)
j_neighbors = np.clip(j_neighbors, 0, 99)
# compute some vectorized result over the neighbors
# as a concrete example, here we'll do a standard deviation
result = x[i_neighbors, j_neighbors].std(axis=1)
The result is an array of values corresponding to the masked region, containing the standard deviation of neighboring values.
Hopefully that approach will work for whatever specific problem you have in mind!
Edit: given the edited question above, here's how my response can be adapted to generate arrays of indices in a vectorized manner:
x = np.random.rand(100, 100)
mask = x > -0.9
i, j = np.where(mask)
i_neighbors = i[:, np.newaxis] + [0, 0, -1, 1]
j_neighbors = j[:, np.newaxis] + [-1, 1, 0, 0]
i_neighbors = np.clip(i_neighbors, 0, 99)
j_neighbors = np.clip(j_neighbors, 0, 99)
indices = np.zeros(x.shape, dtype=int)
indices[mask] = np.arange(len(i))
neighbor_in_mask = mask[i_neighbors, j_neighbors]
neighbors = np.where(neighbor_in_mask,
                     indices[i_neighbors, j_neighbors],
                     np.arange(len(i))[:, None])
left_indices, right_indices, top_indices, bottom_indices = neighbors.T
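To connect this back to the question's my_array[mask] workflow, an illustrative use (not part of the original answer; the smoothing weights are arbitrary):
values = x[mask]                     # flat array of masked cells
neighbor_values = values[neighbors]  # shape (n_masked, 4); out-of-mask neighbors fall back to the cell itself
smoothed = 0.5 * values + 0.125 * neighbor_values.sum(axis=1)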