Tensorflow: Truncating a Tensor Seems to Have No Effect - python

I'm using the tf.data.Dataset API and am trying to truncate a bunch of tensors to length 100. Here's what my dataset looks like:
dataset = tf.data.Dataset.from_tensor_slices(({'reviews': x}, y))
My reviews are just movie reviews (strings), so I perform some preprocessing and map that function on my dataset:
def preprocess(x, y):
    # split on whitespace
    x['reviews'] = tf.string_split([x['reviews']])
    # turn into integers
    x['reviews'], y = data_table.lookup(x['reviews']), labels_table.lookup(y)
    x['reviews'] = tf.sparse_tensor_to_dense(x['reviews'])
    # truncate at length 100
    x['reviews'] = x['reviews'][:100]
    x['reviews'] = x['reviews'][0]
    x['reviews'] = tf.pad(x['reviews'],
                          paddings=[[100 - tf.shape(x['reviews'])[0], 0]],
                          mode='CONSTANT',
                          name='pad_input',
                          constant_values=0)
    return x, y
dataset = dataset.map(preprocess)
However, my code fails with:
tensorflow.python.framework.errors_impl.InvalidArgumentError: Paddings must be non-negative: -140 0
This happens on an input of length 240, so it seems like my padding step computes 100 - 240 = -140 and raises this error.
Here's my question: how is this possible, given that I truncate to length 100 with:
x['reviews'] = x['reviews'][:100]
It seems clear that this line isn't having any effect, so I'm trying to understand why. The docs are very clear that this is acceptable syntactic sugar for tf.slice:
Note that tf.Tensor.__getitem__ is typically a more pythonic way to perform slices, as it allows you to write foo[3:7, :-2] instead of tf.slice(foo, [3, 0], [4, foo.get_shape()[1]-2]).
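For reference, here is a minimal sketch of the shapes involved at each step (assuming TF 1.x, as in the question; the constant review string is just an illustration):

import tensorflow as tf

review = tf.constant("a short movie review with nine tokens total here")   # 9 tokens
tokens = tf.string_split([review])                   # SparseTensor with dense_shape (1, 9)
dense = tf.sparse_tensor_to_dense(tokens, default_value='')
# dense has shape (1, num_tokens): a leading axis of size 1, then the token axis
truncated = dense[:100]                              # slices the leading axis of size 1 -> still (1, num_tokens)
row = truncated[0]                                   # shape (num_tokens,): the full, untruncated token list

If the shapes in your pipeline follow this pattern, the [:100] slice would be acting on the length-1 leading axis rather than on the tokens, which would be consistent with tf.pad still seeing the full sequence length.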
Any ideas?
Thanks!

Related

Tensorflow Datasets: Is there a way to only modify a certain percentage of labels?

I'm using the following example to analyse the performance of a computer vision system depending on the data quality.
Keras RetinaNet implementation: https://keras.io/examples/vision/retinanet/
My goal is to corrupt (stretch, shift) certain percentages (10%, 20%, 30%) of the total bounding boxes across all images. This means that images should be randomly picked and then some of their bounding boxes corrupted so that, in total, the target percentage is affected.
I'm using the tensorflow datasets as my training data (e.g. https://www.tensorflow.org/datasets/catalog/kitti).
My basic idea was to generate an array the size of the total amount of boxes, fill it with 1 (modify box) and 0 (ignore box), and then iterate through all boxes:
random_array = np.concatenate((np.ones(int(error_rate_size*TOTAL_NUMBER_OF_BOXES)+1,dtype=int),np.zeros(int((1-error_rate_size)*TOTAL_NUMBER_OF_BOXES)+1,dtype=int)))
The problem is that the implementation I'm using relies heavily on graph execution and specifically on the map function (https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map). I would like to follow this pattern in order to keep the implemented data pipeline.
What I am hoping to do is to use the map function in combination with a global counter, so I can loop through the array and modify a box whenever the condition is met. It should roughly look something like this:
COUNT = 0

def damage_data(box):
    scaling_range = 2.0
    global COUNT
    COUNT += 1
    if random_array[COUNT] == 1:
        new_box = tf.stack(
            [
                box[0]*scaling_range*tf.random.uniform(shape=(), minval=0.0, maxval=1.0, dtype=tf.float32, seed=1),  # x center
                box[1]*scaling_range*tf.random.uniform(shape=(), minval=0.0, maxval=1.0, dtype=tf.float32, seed=2),  # y center
                box[2]*scaling_range*tf.random.uniform(shape=(), minval=0.0, maxval=1.0, dtype=tf.float32, seed=3),  # width
                box[3]*scaling_range*tf.random.uniform(shape=(), minval=0.0, maxval=1.0, dtype=tf.float32, seed=4),  # height
            ],
            axis=-1,
        )
    else:
        tf.print("Not Changed")
        new_box = tf.stack(
            [
                box[0],  # x center
                box[1],  # y center
                box[2],  # width
                box[3],  # height
            ],
            axis=-1,
        )
    return new_box

def damage_data_cross_sequential(image, bbox, class_id):
    # bbox format [x_center, y_center, width, height]
    bbox = tf.map_fn(damage_data, bbox)
    return image, bbox, class_id

train_dataset = train_dataset.map(damage_data_cross_sequential, num_parallel_calls=1)
But with this code the variable COUNT is not incremented globally; instead, every map() call starts from the initial value 0. I assume this is somehow caused by the graph execution and the parallel processing in map().
The question is now whether there is any way to globally increment a counter through the map function, or whether I could extend the given dataset with a unique identifier (e.g. add box[5] = id).
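For reference, the behaviour can be reproduced with a minimal standalone sketch (assuming TF 2.x): the Python-side increment runs while map() traces the function into a graph, not once per element:

import tensorflow as tf

COUNT = 0

def mark(x):
    global COUNT
    COUNT += 1          # executes at trace time only
    return x

ds = tf.data.Dataset.range(10).map(mark)
for _ in ds:            # consume all 10 elements
    pass
print(COUNT)            # a small constant (typically 1), not 10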
I hope the problem is clear and thanks already! :)
--------------UPDATE 1-------------------------------
The second approach as described by @Lescurel is what I'm trying to do.
Some clarifications about the dataset structure.
The number of boxes per image is not identical; it changes from image to image.
e.g. sample 1: ((x_dim, y_dim, 3), (4,4)), sample 2: ((x_dim, y_dim, 3), (2,4))
For a better understanding the structure can be reproduced with the following:
import tensorflow as tf
import tensorflow_datasets as tfds
import numpy as np
valid_ds = tfds.load('kitti', split='validation') # validation is a smaller set
def select_relevant_info(sample):
    image = sample["image"]
    bbox = sample["objects"]["bbox"]
    class_id = tf.cast(sample["objects"]["type"], dtype=tf.int32)
    return image, bbox, class_id

valid_ds = valid_ds.map(select_relevant_info)

for sample in valid_ds.take(1):
    print(sample)
For plenty of reasons, using a global state is not a terribly good idea, but it's probably even worse in a concurrent context like this one.
There are at least two other ways of implementing what you want:
using a random sample with a threshold as condition to modify the label
put your random array in the dataset as the condition to modify the label.
I personally prefer the first option, which is simpler.
An example.
Let's generate some random data and create a tf.data.Dataset. In this example, the total number of samples is 1000:
imgs = tf.random.uniform((1000, 4, 4))
boxes = tf.ones((1000, 4))
ds = tf.data.Dataset.from_tensor_slices((imgs, boxes))
First option: Random Sample
This function draws a number uniformly between 0 and 1. If this number is higher than the threshold prob, nothing happens; otherwise, we modify the label. With the default prob=0.05, that gives a 5% chance of modifying the label.
def change_label_with_prob(label, prob=0.05, scaling_range=2.):
    return tf.cond(
        tf.random.uniform(()) > prob,
        lambda: label,
        lambda: label * scaling_range * tf.random.uniform((4,), 0., 1., dtype=tf.float32),
    )
You can simply call it with Dataset.map:
new_ds = ds.map(lambda img, box: (img, change_label_with_prob(box)))
Second option: Pass the condition array around
First, we generate an array filled with our conditions: 1 if we want to modify the label, 0 if not.
# let's set the number to change to 200
N_TO_CHANGE = 200
# randomly shuffled array with 200 ones and 800 zeros
cond_array = tf.random.shuffle(
    tf.concat([tf.ones((N_TO_CHANGE,), dtype=tf.bool), tf.zeros((1000 - N_TO_CHANGE,), dtype=tf.bool)], axis=0)
)
Then we can create a dataset from that array of conditions, and zip it with our previous dataset:
# creating a dataset from the conditional array
ds_cond = tf.data.Dataset.from_tensor_slices(cond_array)
# zipping the two datasets together
ds_data_and_cond = tf.data.Dataset.zip((ds, ds_cond))
# each element of that dataset is ((img, box), cond)
We can write our function, roughly the same as before:
def change_label_with_cond(label, cond, scaling_range=2.0):
    # if cond is True, modify the label; do nothing otherwise
    return tf.cond(
        cond,
        lambda: label * scaling_range * tf.random.uniform((4,), 0.0, 1.0, dtype=tf.float32),
        lambda: label,
    )
And then map the function on our new dataset, paying attention to the nested shape of each element of the dataset:
ds_changed_label = ds_data_and_cond.map(
    lambda img_and_box, z: (img_and_box[0], change_label_with_cond(img_and_box[1], z))
)
# the new dataset has elements of shape (img, box), same as before the zipping
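As a quick sanity check (a sketch reusing the toy ds and cond_array from above), you can count how many boxes actually changed; with the all-ones boxes it should come out to exactly N_TO_CHANGE:

import numpy as np

changed = sum(
    int(not np.allclose(new_box, old_box))
    for (_, new_box), ((_, old_box), _) in zip(ds_changed_label.as_numpy_iterator(),
                                               ds_data_and_cond.as_numpy_iterator())
)
print(changed)   # 200 for this toy data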

Applying function over entire parameter space/2d array, Numpy

I'm in one of those weird places where I know exactly what I want to do. I could easily code it up using for loops, but I'm trying to learn NumPy and I can't formulate how to solve this with NumPy.
I want to have a 2d array of parameter space: all values between 1200 and 1800, and all combinations therein. So [1200, 1200], [1200, 1201], [1200, 1202] ... [1201, 1200], [1201, 1201], etc.
I want to apply a function across this entire parameter space. The function uses a further two arrays, check_array1 and check_array2, which also hold values in the 1200-1800 range, but random ones, e.g. [1356, 1689, 1436, ...] and [1768, 1495, 1358, ...].
The function needs to move through the parameter space checking a condition, which is basically: if x < check_array1 and y < check_array2 then 1 else 0, where x and y are each specific point in the 2d parameter space. It needs to check against every value combination in the check arrays, sum the total, do a comparison to another static value, and return the difference.
Each unique combination in the parameter space grid will then have a unique value associated with it, based on how those specific x and y values from the parameter space compare to the 2 check arrays.
Hopefully the above makes sense; I just can't figure out how to work this into a NumPy-friendly problem. Sorry for the wall of text.
Edit: I've written it in more basic Python to better illustrate what I'm trying to do.
check1 = np.random.randint(1200, 1801, 300)
check2 = np.random.randint(1200, 1801, 300)
def check_this_double(i, j, check1, check2):
    total = 0
    for num in range(0, len(check1)):
        if (i < check1[num]) or (j < check2[num]):
            total += 1
    return total

outputs = {}
for i in range(1200, 1801):
    for j in range(1200, 1801):
        outputs[i, j] = check_this_double(i, j, check1, check2)
Edit 2: I believe I have it.
Following from Mountains' code creating the p_space, and then using np.vectorize on a normal Python function.
check1 = np.random.randint(1200, 1801, 300)
check2 = np.random.randint(1200, 1801, 300)
def rate_calc(i, j):
    total = np.where(np.logical_or(check1 < i, check2 < j), 1, 0)
    return total.sum()

rate_calc_v = np.vectorize(rate_calc)
final = rate_calc_v(p_space[:, 0], p_space[:, 1])
Feels kind of like cheating :) and there must be a way to do it without np.vectorize, but this works for me, I believe.
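For what it's worth, here is a sketch of a fully broadcast version without np.vectorize, mirroring the check_this_double logic from the first edit (the boolean intermediate is 601*601*300 bytes, roughly 100 MB):

import numpy as np

check1 = np.random.randint(1200, 1801, 300)
check2 = np.random.randint(1200, 1801, 300)

i_vals = np.arange(1200, 1801)       # axis 0 of the parameter grid
j_vals = np.arange(1200, 1801)       # axis 1 of the parameter grid

# condition[a, b, k] is True when i_vals[a] < check1[k] or j_vals[b] < check2[k]
condition = (i_vals[:, None, None] < check1[None, None, :]) | \
            (j_vals[None, :, None] < check2[None, None, :])
counts = condition.sum(axis=2)       # shape (601, 601): one count per (i, j) pair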
I don't fully understand the problem you are trying to solve. I hope the following will give you a starting point on how numpy can be used. I recommend going through a numpy introductory tutorial.
numpy boolean indexing and vector math can improve speed and reduce the need for loops.
Here is my understanding of the first part of your question.
import numpy as np
xv, yv = np.meshgrid(np.arange(1200, 1801), np.arange(1200, 1801))
p_space = np.stack((xv, yv), axis=-1) # the 2d array described
# print original values
print(p_space[0,:10,0])
print(p_space[0,-10:,0])
old_shape = p_space.shape
p_space = p_space.reshape(-1, 2) # flatten the array for the compare
check1 = np.random.randint(1200, 1801, len(p_space))
check2 = np.random.randint(1200, 1801, len(p_space))
# you can use this to access and modify values that meet the condition
index_array = np.logical_and(p_space[:, 0] < check1, p_space[:, 1] < check2)
# do some sort of complex math
p_space[index_array] = p_space[index_array] / 2 + 10
# get the sum across the two columns
print(np.sum(p_space, axis=0))
p_space = p_space.reshape(old_shape) # return to the grid shape
# print modified values
print(p_space[0,:10,0]) # likely to be changed based on checks
print(p_space[0,-10:,0]) # unlikely to be changed

Can somebody explain how this one hot encoder method works?

I have gotten this code online that one-hot encodes an array of label-encoded values. I particularly don't understand the last line. Please help.
I initially thought that wherever y is 1, it replaces the value at that index with 1, but how?
def read_dataset():
    df = pd.read_csv("sonar.all-data.csv")
    x = df[df.columns[0:60]].values
    y = df[df.columns[60]]
    encoder = LabelEncoder()
    encoder.fit(y)
    y = encoder.transform(y)  # labels need to be integer-encoded before one-hot encoding
    y = oneHotEncode(y)
    return (x, y)

def oneHotEncode(labels):
    n_labels = len(labels)
    n_unique_labels = len(np.unique(labels))
    oneHE = np.zeros((n_labels, n_unique_labels))
    oneHE[np.arange(n_labels), labels] = 1
    return oneHE
I am expecting to understand how this code works, but I don't understand the line with np.arange.
np.arange() is similar to range() but creates a numpy array. Hence, if you have 10 labels, it returns an array with the consecutive numbers from 0 to 9. This is used to choose the row of the oneHE array (which contains only zeros after initialization). The labels array is used to choose the columns.
So it's just selecting the respective column in all rows and setting the values to 1.
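A tiny worked example (with made-up integer labels) makes the fancy indexing concrete:

import numpy as np

labels = np.array([2, 0, 1, 0])              # label-encoded classes for 4 samples
n_labels = len(labels)                       # 4
n_unique_labels = len(np.unique(labels))     # 3
oneHE = np.zeros((n_labels, n_unique_labels))

# pairs row indices [0, 1, 2, 3] with column indices [2, 0, 1, 0]
oneHE[np.arange(n_labels), labels] = 1
print(oneHE)
# [[0. 0. 1.]
#  [1. 0. 0.]
#  [0. 1. 0.]
#  [1. 0. 0.]]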

Is it possible to loop when size of a tensor is not known?

I would like to read time series sequences saved in tfrecord format. Each time series has a different length. What I want to achieve is to split a long tensor into a batch of smaller tensors of a requested length. It is very easy to do with numpy arrays, and it would look like this:
length = 200
for begin in range(tensor_size - length):
    tensor_slice = tf.slice(my_tensor, [begin], [length])
    my_slices.append(tensor_slice)
With such a function my problem is: how do I get the size of a tensor, so that using a loop is possible?
Below is the part of the code where an example is read and decoded.
file_queue = tf.train.string_input_producer(tf_files, num_epochs=num_epochs)
reader = tf.TFRecordReader()
_, serialized_records = reader.read(file_queue)
feature_map = {
    "speed": tf.FixedLenSequenceFeature([], tf.float32, allow_missing=True),
    "battery": tf.FixedLenSequenceFeature([], tf.float32, allow_missing=True)
}
features = tf.parse_single_example(serialized_records, feature_map)
speed = tf.cast(features['speed'], tf.float32)
battery = tf.cast(features['battery'], tf.float32)
speeds = []
batteries = []
#SPLIT TENSOR INTO SMALLER TENSORS
features = tf.train.shuffle_batch([speeds, batteries],
                                  batch_size=batch_size,
                                  capacity=5000,
                                  num_threads=4,
                                  min_after_dequeue=1)
return features
You cannot loop through a tensor like that in Python. You can use tf.while_loop, although it is generally avoided unless it is really the only way to achieve what you want, since it tends to be slow. In your case, you can get the result you want without looping, for example using tf.gather:
length = 200
features = ...
# Number of elements
n = tf.shape(features)[0]
# Index from zero to number of subtensors
split_idx = tf.range(n - length + 1)
# Index from zero to subtensor length
length_idx = tf.range(length)
# Indices for gather; each row advances one position, like a "rolling window"
gather_idx = split_idx[:, tf.newaxis] + length_idx
# Gather result
features_split = tf.gather(features, gather_idx)
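As a quick illustration of the shapes (a sketch with a toy 1-D tensor and length=4 instead of 200; in TF 1.x you would evaluate features_split inside a session):

import tensorflow as tf

features = tf.range(10)                 # [0, 1, ..., 9]
length = 4
n = tf.shape(features)[0]
split_idx = tf.range(n - length + 1)    # [0, 1, ..., 6]
length_idx = tf.range(length)           # [0, 1, 2, 3]
gather_idx = split_idx[:, tf.newaxis] + length_idx   # shape (7, 4)
features_split = tf.gather(features, gather_idx)
# rows: [0 1 2 3], [1 2 3 4], ..., [6 7 8 9] -> shape (7, 4)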

Using Mann Kendall in python with a lot of data

I have a set of 46 years worth of rainfall data. It's in the form of 46 numpy arrays each with a shape of 145, 192, so each year is a different array of maximum rainfall data at each lat and lon coordinate in the given model.
I need to create a global map of tau values by doing an M-K test (Mann-Kendall) for each coordinate over the 46 years.
I'm still learning python, so I've been having trouble finding a way to go through all the data in a simple way that doesn't involve me making 27840 new arrays for each coordinate.
So far I've looked into how to use scipy.stats.kendalltau and using the definition from here: https://github.com/mps9506/Mann-Kendall-Trend
EDIT:
To clarify and add a little more detail, I need to perform a test for each coordinate and not just for each file individually. For example, for the first M-K test, I would want my x=46 and I would want y=data1[0,0],data2[0,0],data3[0,0]...data46[0,0]. Then this process repeats for every single coordinate in each array. In total the M-K test would be done 27840 times and leave me with 27840 tau values that I can then plot on a global map.
EDIT 2:
I'm now running into a different problem. Going off of the suggested code, I have the following:
for i in range(145):
    for j in range(192):
        out[i,j] = mk_test(yrmax[:,i,j], alpha=0.05)
print out
I used numpy.stack to stack all 46 arrays into a single array (yrmax) with shape (46L, 145L, 192L). I've tested it out and it calculates p and tau correctly if I change the code from out[i,j] to just out. However, doing this messes up the for loop so it only takes the results from the last coordinate instead of all of them. And if I leave the code as it is above, I get the error: TypeError: list indices must be integers, not tuple
My first guess was that it has to do with mk_test and how the information is supposed to be returned in the definition. So I've tried altering the code from the link above to change how the data is returned, but I keep getting errors relating back to tuples. So now I'm not sure where it's going wrong and how to fix it.
EDIT 3:
One more clarification I thought I should add. I've already modified the definition in the link so it returns only the two number values I want for creating maps, p and z.
I don't think this is as big an ask as you may imagine. From your description it sounds like you don't actually want the scipy kendalltau, but the function in the repository you posted. Here is a little example I set up:
from time import time
import numpy as np
from mk_test import mk_test
data = np.array([np.random.rand(145, 192) for _ in range(46)])
mk_res = np.empty((145, 192), dtype=object)
start = time()
for i in range(145):
    for j in range(192):
        mk_res[i, j] = mk_test(data[:, i, j], alpha=0.05)
print(f'Elapsed Time: {time() - start} s')
Elapsed Time: 35.21990394592285 s
My system is a MacBook Pro 2.7 GHz Intel Core I7 with 16 GB Ram so nothing special.
Each entry in the mk_res array (shape 145, 192) corresponds to one of your coordinate points and contains an entry like so:
array(['no trend', 'False', '0.894546014835', '0.132554125342'], dtype='<U14')
One thing that might be useful would be to modify the code in mk_test.py to return all numerical values. So instead of 'no trend'/'positive'/'negative' you could return 0/1/-1, and 1/0 for True/False and then you wouldn't have to worry about the whole object array type. I don't know what kind of analysis you might want to do downstream but I imagine that would preemptively circumvent any headaches.
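A minimal sketch of that idea (a hypothetical wrapper; it assumes mk_test returns a (trend, h, p, z) tuple with the trend names mentioned above):

import numpy as np
from mk_test import mk_test

TREND_CODES = {'no trend': 0, 'positive': 1, 'negative': -1}   # trend names assumed

def mk_test_numeric(series, alpha=0.05):
    trend, h, p, z = mk_test(series, alpha=alpha)
    return np.array([TREND_CODES.get(trend, 0), float(h), p, z])

Storing these rows in a regular float array of shape (145, 192, 4) then avoids the object dtype entirely.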
Thanks to the answers provided and some work I was able to work out a solution that I'll provide here for anyone else that needs to use the Mann-Kendall test for data analysis.
The first thing I needed to do was flatten the original array I had into a 1D array. I know there is probably an easier way to go about doing this, but I ultimately used the following code, based on the code Grr suggested.
x = 46
out1 = np.empty(x)
out = np.empty((0))
for i in range(145):
    for j in range(192):
        out1 = yrmax[:, i, j]
        out = np.append(out, out1, axis=0)
Then I reshaped the resulting array (out) as follows:
out2 = np.reshape(out,(27840,46))
I did this so my data would be in a format compatible with scipy.stats.kendalltau. 27840 is the total number of coordinates that will be on my map (i.e. it's just 145*192) and 46 is the number of years the data spans.
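For reference, the same flatten-and-reshape can be done in one step with transpose and reshape (a sketch assuming yrmax has shape (46, 145, 192)):

out2 = yrmax.transpose(1, 2, 0).reshape(27840, 46)   # one row of 46 yearly values per coordinate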
I then used the following loop, modified from Grr's code, to find Kendall-tau and its respective p-value at each latitude and longitude over the 46 year period.
x = range(46)
y = np.zeros((0))
for j in range(27840):
    b = sc.stats.kendalltau(x, out2[j, :])
    y = np.append(y, b, axis=0)
Finally, I reshaped the data one more time, as shown: newdata = np.reshape(y, (145, 192, 2)), so the final array is in a suitable format to be used to create a global map of both tau and p-values.
Thanks everyone for the assistance!
Depending on your situation, it might just be easiest to make the arrays.
You won't really need them all in memory at once (not that it sounds like a terrible amount of data). Something like this only has to deal with one "copied out" coordinate trend at once:
SIZE = (145, 192)
year_matrices = load_years()  # list of one 145x192 array per year
result_matrix = numpy.zeros(SIZE)

for x in range(SIZE[0]):
    for y in range(SIZE[1]):
        coord_trend = map(lambda d: d[x][y], year_matrices)
        result_matrix[x][y] = analyze_trend(coord_trend)

print result_matrix
Now, there are things like itertools.izip that could help you if you really want to avoid actually copying the data.
Here's a concrete example of how Python's "zip" might work with data like yours (although as if you'd used ndarray.flatten on each year):
year_arrays = [
    ['y0_coord0_val', 'y0_coord1_val', 'y0_coord2_val', 'y0_coord3_val'],
    ['y1_coord0_val', 'y1_coord1_val', 'y1_coord2_val', 'y1_coord3_val'],
    ['y2_coord0_val', 'y2_coord1_val', 'y2_coord2_val', 'y2_coord3_val'],
]
assert len(year_arrays) == 3
assert len(year_arrays[0]) == 4

coord_arrays = zip(*year_arrays)  # i.e. `zip(year_arrays[0], year_arrays[1], year_arrays[2])`
# original data is essentially transposed
assert len(coord_arrays) == 4
assert len(coord_arrays[0]) == 3
assert coord_arrays[0] == ('y0_coord0_val', 'y1_coord0_val', 'y2_coord0_val')
assert coord_arrays[1] == ('y0_coord1_val', 'y1_coord1_val', 'y2_coord1_val')
assert coord_arrays[2] == ('y0_coord2_val', 'y1_coord2_val', 'y2_coord2_val')
assert coord_arrays[3] == ('y0_coord3_val', 'y1_coord3_val', 'y2_coord3_val')

flat_result = map(analyze_trend, coord_arrays)
The example above still copies the data (and all at once, rather than a coordinate at a time!) but hopefully shows what's going on.
Now, if you replace zip with itertools.izip and map with itertools.imap, then the copies needn't occur: itertools wraps the original arrays and keeps track of where it should be fetching values from internally.
There's a catch, though: to take advantage of itertools you have to access the data only sequentially (i.e. through iteration). In your case, it looks like the code at https://github.com/mps9506/Mann-Kendall-Trend/blob/master/mk_test.py might not be compatible with that. (I haven't reviewed the algorithm itself to see if it could be.)
Also please note that in the example I've glossed over the numpy ndarray stuff and just show flat coordinate arrays. It looks like numpy has some of its own options for handling this instead of itertools, e.g. this answer says "Taking the transpose of an array does not make a copy". Your question was somewhat general, so I've tried to give some general tips as to ways one might deal with larger data in Python.
I ran into the same task and have managed to come up with a vectorized solution using numpy and scipy.
The formulas are the same as on this page: https://vsp.pnnl.gov/help/Vsample/Design_Trend_Mann_Kendall.htm.
The trickiest part is to work out the adjustment for the tied values. I modified the code as in this answer to compute the number of tied values for each record, in a vectorized manner.
Below are the 2 functions:
import copy
import numpy as np
from scipy.stats import norm

def countTies(x):
    '''Count number of ties in rows of a 2D matrix

    Args:
        x (ndarray): 2d matrix.
    Returns:
        result (ndarray): 2d matrix with same shape as <x>. In each
            row, the number of ties are inserted at (not really) arbitrary
            locations. The locations of the tie numbers are not important,
            since they will be subsequently put into a formula of
            sum(t*(t-1)*(2t+5)).

    Inspired by: https://stackoverflow.com/a/24892274/2005415.
    '''
    if np.ndim(x) != 2:
        raise Exception("<x> should be 2D.")

    m, n = x.shape
    pad0 = np.zeros([m, 1]).astype('int')

    x = copy.deepcopy(x)
    x.sort(axis=1)
    diff = np.diff(x, axis=1)
    cated = np.concatenate([pad0, np.where(diff == 0, 1, 0), pad0], axis=1)
    absdiff = np.abs(np.diff(cated, axis=1))

    rows, cols = np.where(absdiff == 1)
    rows = rows.reshape(-1, 2)[:, 0]
    cols = cols.reshape(-1, 2)
    counts = np.diff(cols, axis=1) + 1

    result = np.zeros(x.shape).astype('int')
    result[rows, cols[:, 1]] = counts.flatten()

    return result
def MannKendallTrend2D(data, tails=2, axis=0, verbose=True):
    '''Vectorized Mann-Kendall tests on 2D matrix rows/columns

    Args:
        data (ndarray): 2d array with shape (m, n).
    Keyword Args:
        tails (int): 1 for 1-tail, 2 for 2-tail test.
        axis (int): 0: test trend in each column. 1: test trend in each
            row.
    Returns:
        z (ndarray): If <axis> = 0, 1d array with length <n>, standard scores
            corresponding to data in each column of <data>.
            If <axis> = 1, 1d array with length <m>, standard scores
            corresponding to data in each row of <data>.
        p (ndarray): p-values corresponding to <z>.
    '''
    if np.ndim(data) != 2:
        raise Exception("<data> should be 2D.")

    # always put records in rows and do M-K test on each row
    if axis == 0:
        data = data.T

    m, n = data.shape
    mask = np.triu(np.ones([n, n])).astype('int')
    mask = np.repeat(mask[None, ...], m, axis=0)
    s = np.sign(data[:, None, :] - data[:, :, None]).astype('int')
    s = (s * mask).sum(axis=(1, 2))

    #--------------------Count ties--------------------
    counts = countTies(data)
    tt = counts * (counts - 1) * (2*counts + 5)
    tt = tt.sum(axis=1)

    #-----------------Sample Gaussian-----------------
    var = (n * (n-1) * (2*n+5) - tt) / 18.
    eps = 1e-8  # avoid dividing by 0
    z = (s - np.sign(s)) / (np.sqrt(var) + eps)
    p = norm.cdf(z)
    p = np.where(p > 0.5, 1 - p, p)

    if tails == 2:
        p = p * 2

    return z, p
I assume your data come in the layout of (time, latitude, longitude), and you are examining the temporal trend for each lat/lon cell.
To simulate this task, I synthesized a sample data array of shape (50, 145, 192). The 50 time points are taken from Example 5.9 of the book Wilks 2011, Statistical methods in the atmospheric sciences. And then I simply duplicated the same time series 27840 times to make it (50, 145, 192).
Below is the computation:
x = np.array([0.44, 1.18, 2.69, 2.08, 3.66, 1.72, 2.82, 0.72, 1.46, 1.30, 1.35, 0.54,
              2.74, 1.13, 2.50, 1.72, 2.27, 2.82, 1.98, 2.44, 2.53, 2.00, 1.12, 2.13, 1.36,
              4.9, 2.94, 1.75, 1.69, 1.88, 1.31, 1.76, 2.17, 2.38, 1.16, 1.39, 1.36,
              1.03, 1.11, 1.35, 1.44, 1.84, 1.69, 3., 1.36, 6.37, 4.55, 0.52, 0.87, 1.51])

# create a big cube with shape: (T, Y, X)
arr = np.zeros([len(x), 145, 192])
for i in range(arr.shape[1]):
    for j in range(arr.shape[2]):
        arr[:, i, j] = x
print(arr.shape)

# re-arrange into tabular layout: (Y*X, T)
arr = np.transpose(arr, [1, 2, 0])
arr = arr.reshape(-1, len(x))
print(arr.shape)

import time
t1 = time.time()
z, p = MannKendallTrend2D(arr, tails=2, axis=1)
p = p.reshape(145, 192)
t2 = time.time()
print('time =', t2 - t1)
The p-value for that sample time series is 0.63341565, which I have validated against the pymannkendall module result. Since arr contains merely duplicated copies of x, the resultant p is a 2d array of size (145, 192), with all 0.63341565.
And it took me only 1.28 seconds to compute that.
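The cross-check against pymannkendall mentioned above can be sketched like this (assuming the package is installed; its original_test returns a result with a p field):

import pymannkendall as mk

result = mk.original_test(x)   # x is the 50-point sample series from above
print(result.p)                # ~0.6334, matching the vectorized implementation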
