I have a set of 46 years worth of rainfall data. It's in the form of 46 numpy arrays each with a shape of 145, 192, so each year is a different array of maximum rainfall data at each lat and lon coordinate in the given model.
I need to create a global map of tau values by doing an M-K test (Mann-Kendall) for each coordinate over the 46 years.
I'm still learning Python, so I've been having trouble finding a simple way to go through all the data that doesn't involve making 27840 new arrays, one for each coordinate.
So far I've looked into using scipy.stats.kendalltau and the function defined here: https://github.com/mps9506/Mann-Kendall-Trend
EDIT:
To clarify and add a little more detail, I need to perform a test for each coordinate, not just for each file individually. For example, for the first M-K test I would want my x=46 and my y=data1[0,0],data2[0,0],data3[0,0]...data46[0,0], and then to repeat this process for every single coordinate in each array. In total the M-K test would be done 27840 times, leaving me with 27840 tau values that I can then plot on a global map.
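For reference, here is a minimal sketch of that per-coordinate layout (data1, data2, data3 below are just random placeholder arrays standing in for my real yearly data):

import numpy as np

data1, data2, data3 = (np.random.rand(145, 192) for _ in range(3))  # placeholders for the yearly arrays
yrmax = np.stack([data1, data2, data3])  # in practice all 46 arrays, shape (46, 145, 192)
x = np.arange(yrmax.shape[0])            # the years
y = yrmax[:, 0, 0]                       # the time series at coordinate (0, 0)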
EDIT 2:
I'm now running into a different problem. Going off of the suggested code, I have the following:
for i in range(145):
    for j in range(192):
        out[i,j] = mk_test(yrmax[:,i,j],alpha=0.05)
print out
I used numpy.stack to stack all 46 arrays into a single array (yrmax) with shape (46L, 145L, 192L). I've tested it out and it calculates p and tau correctly if I change the code from out[i,j] to just out. However, doing this messes up the for loop so it only keeps the results from the last coordinate instead of all of them. And if I leave the code as it is above, I get the error: TypeError: list indices must be integers, not tuple
My first guess was that it has to do with mk_test and how the information is supposed to be returned in the definition. So I've tried altering the code from the link above to change how the data is returned, but I keep getting errors relating back to tuples. So now I'm not sure where it's going wrong and how to fix it.
EDIT 3:
One more clarification I thought I should add: I've already modified the definition in the link so that it returns only the two numerical values I want for creating maps, p and z.
I don't think this is as big an ask as you may imagine. From your description it sounds like you don't actually want the scipy kendalltau, but the function in the repository you posted. Here is a little example I set up:
from time import time
import numpy as np
from mk_test import mk_test

data = np.array([np.random.rand(145, 192) for _ in range(46)])
mk_res = np.empty((145, 192), dtype=object)

start = time()
for i in range(145):
    for j in range(192):
        mk_res[i, j] = mk_test(data[:, i, j], alpha=0.05)
print(f'Elapsed Time: {time() - start} s')
Elapsed Time: 35.21990394592285 s
My system is a MacBook Pro with a 2.7 GHz Intel Core i7 and 16 GB of RAM, so nothing special.
Each entry in the mk_res array (shape 145, 192) corresponds to one of your coordinate points and contains an entry like so:
array(['no trend', 'False', '0.894546014835', '0.132554125342'], dtype='<U14')
One thing that might be useful would be to modify the code in mk_test.py to return all numerical values. So instead of 'no trend'/'positive'/'negative' you could return 0/1/-1, and 1/0 for True/False and then you wouldn't have to worry about the whole object array type. I don't know what kind of analysis you might want to do downstream but I imagine that would preemptively circumvent any headaches.
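A minimal sketch of that idea (assuming mk_test returns (trend, h, p, z) as in the linked repository, with the trend reported as a string):

# map the string outputs onto numbers; the dictionary covers either naming convention,
# since the exact trend labels depend on the mk_test version used
trend_codes = {'no trend': 0, 'positive': 1, 'negative': -1,
               'increasing': 1, 'decreasing': -1}

def mk_test_numeric(x, alpha=0.05):
    trend, h, p, z = mk_test(x, alpha=alpha)
    return trend_codes.get(trend, 0), int(h), p, z

The results could then live in a plain float array of shape (145, 192, 4) instead of an object array.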
Thanks to the answers provided and some work of my own, I was able to put together a solution that I'll provide here for anyone else who needs to use the Mann-Kendall test for data analysis.
The first thing I needed to do was flatten the original array I had into a 1D array. I know there is probably an easier way to do this, but I ultimately used the following code, based on the code Grr suggested:
x = 46
out1 = np.empty(x)
out = np.empty((0))
for i in range(145):
    for j in range(192):
        out1 = yrmax[:,i,j]
        out = np.append(out, out1, axis=0)
Then I reshaped the resulting array (out) as follows:
out2 = np.reshape(out,(27840,46))
I did this so my data would be in a format compatible with scipy.stats.kendalltau. 27840 is the total number of coordinates that will be on my map (i.e. it's just 145*192), and 46 is the number of years the data spans.
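For reference, the same (27840, 46) table could also be built in one step with a transpose and a reshape (a sketch, assuming yrmax has shape (46, 145, 192)):

out2 = yrmax.transpose(1, 2, 0).reshape(27840, 46)  # same row ordering as the loop above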
I then used the following loop, which I modified from Grr's code, to find Kendall's tau and its respective p-value at each latitude and longitude over the 46 year period.
x = range(46)
y = np.zeros((0))
for j in range(27840):
    b = sc.stats.kendalltau(x, out2[j,:])
    y = np.append(y, b, axis=0)
Finally, I reshaped the data one more time, newdata = np.reshape(y,(145,192,2)), so the final array is in a suitable format to be used to create a global map of both the tau and p-values.
Thanks everyone for the assistance!
Depending on your situation, it might just be easiest to make the arrays.
You won't really need them all in memory at once (not that it sounds like a terrible amount of data). Something like this only has to deal with one "copied out" coordinate trend at once:
SIZE = (145,192)

year_matrices = load_years()  # list of one 145x192 array per year
result_matrix = numpy.zeros(SIZE)

for x in range(SIZE[0]):
    for y in range(SIZE[1]):
        coord_trend = map(lambda d: d[x][y], year_matrices)
        result_matrix[x][y] = analyze_trend(coord_trend)

print result_matrix
Now, there are things like itertools.izip that could help you if you really want to avoid actually copying the data.
Here's a concrete example of how Python's "zip" might work with data like yours (although as if you'd used ndarray.flatten on each year):
year_arrays = [
    ['y0_coord0_val', 'y0_coord1_val', 'y0_coord2_val', 'y0_coord3_val'],
    ['y1_coord0_val', 'y1_coord1_val', 'y1_coord2_val', 'y1_coord3_val'],
    ['y2_coord0_val', 'y2_coord1_val', 'y2_coord2_val', 'y2_coord3_val'],
]
assert len(year_arrays) == 3
assert len(year_arrays[0]) == 4

coord_arrays = zip(*year_arrays)  # i.e. `zip(year_arrays[0], year_arrays[1], year_arrays[2])`
# original data is essentially transposed
assert len(coord_arrays) == 4
assert len(coord_arrays[0]) == 3
assert coord_arrays[0] == ('y0_coord0_val', 'y1_coord0_val', 'y2_coord0_val')
assert coord_arrays[1] == ('y0_coord1_val', 'y1_coord1_val', 'y2_coord1_val')
assert coord_arrays[2] == ('y0_coord2_val', 'y1_coord2_val', 'y2_coord2_val')
assert coord_arrays[3] == ('y0_coord3_val', 'y1_coord3_val', 'y2_coord3_val')

flat_result = map(analyze_trend, coord_arrays)
The example above still copies the data (and all at once, rather than a coordinate at a time!) but hopefully shows what's going on.
Now, if you replace zip with itertools.izip and map with itertools.imap then the copies needn't occur; itertools wraps the original arrays and keeps track of where it should be fetching values from internally.
There's a catch, though: to take advantage of itertools you need to access the data only sequentially (i.e. through iteration). In your case, it looks like the code at https://github.com/mps9506/Mann-Kendall-Trend/blob/master/mk_test.py might not be compatible with that. (I haven't reviewed the algorithm itself to see if it could be.)
Also please note that in the example I've glossed over the numpy ndarray stuff and just shown flat coordinate arrays. It looks like numpy has some of its own options for handling this instead of itertools, e.g. this answer says "Taking the transpose of an array does not make a copy". Your question was somewhat general, so I've tried to give some general tips as to ways one might deal with larger data in Python.
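In numpy terms, a small sketch of that idea (assuming the years are stacked into a single array of shape (46, 145, 192)):

import numpy as np

yrmax = np.random.rand(46, 145, 192)    # stand-in for the stacked yearly data
coord_series = yrmax.reshape(46, -1).T  # reshape and transpose here are views, so no data is copied
print(coord_series.shape)               # (27840, 46): one row per coordinate

Each row could then be handed to the trend function one at a time.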
I ran into the same task and have managed to come up with a vectorized solution using numpy and scipy.
The formulas are the same as on this page: https://vsp.pnnl.gov/help/Vsample/Design_Trend_Mann_Kendall.htm.
The trickiest part is to work out the adjustment for the tied values. I modified the code as in this answer to compute the number of tied values for each record, in a vectorized manner.
Below are the 2 functions:
import copy
import numpy as np
from scipy.stats import norm
def countTies(x):
    '''Count number of ties in rows of a 2D matrix

    Args:
        x (ndarray): 2d matrix.
    Returns:
        result (ndarray): 2d matrix with same shape as <x>. In each
            row, the numbers of ties are inserted at (not really) arbitrary
            locations.
            The locations of the tie numbers are not important, since
            they will subsequently be put into a formula of sum(t*(t-1)*(2t+5)).

    Inspired by: https://stackoverflow.com/a/24892274/2005415.
    '''
    if np.ndim(x) != 2:
        raise Exception("<x> should be 2D.")

    m, n = x.shape
    pad0 = np.zeros([m, 1]).astype('int')

    x = copy.deepcopy(x)
    x.sort(axis=1)
    diff = np.diff(x, axis=1)
    cated = np.concatenate([pad0, np.where(diff==0, 1, 0), pad0], axis=1)
    absdiff = np.abs(np.diff(cated, axis=1))

    rows, cols = np.where(absdiff==1)
    rows = rows.reshape(-1, 2)[:, 0]
    cols = cols.reshape(-1, 2)
    counts = np.diff(cols, axis=1)+1
    result = np.zeros(x.shape).astype('int')
    result[rows, cols[:,1]] = counts.flatten()

    return result
def MannKendallTrend2D(data, tails=2, axis=0, verbose=True):
    '''Vectorized Mann-Kendall tests on 2D matrix rows/columns

    Args:
        data (ndarray): 2d array with shape (m, n).
    Keyword Args:
        tails (int): 1 for a 1-tailed test, 2 for a 2-tailed test.
        axis (int): 0: test trend in each column. 1: test trend in each
            row.
    Returns:
        z (ndarray): If <axis> = 0, 1d array with length <n>, standard scores
            corresponding to the data in each column of <data>.
            If <axis> = 1, 1d array with length <m>, standard scores
            corresponding to the data in each row of <data>.
        p (ndarray): p-values corresponding to <z>.
    '''
    if np.ndim(data) != 2:
        raise Exception("<data> should be 2D.")

    # always put records in rows and do the M-K test on each row
    if axis == 0:
        data = data.T

    m, n = data.shape
    mask = np.triu(np.ones([n, n])).astype('int')
    mask = np.repeat(mask[None,...], m, axis=0)
    s = np.sign(data[:,None,:]-data[:,:,None]).astype('int')
    s = (s * mask).sum(axis=(1,2))

    #--------------------Count ties--------------------
    counts = countTies(data)
    tt = counts * (counts - 1) * (2*counts + 5)
    tt = tt.sum(axis=1)

    #-----------------Sample Gaussian-----------------
    var = (n * (n-1) * (2*n+5) - tt) / 18.
    eps = 1e-8  # avoid dividing by 0
    z = (s - np.sign(s)) / (np.sqrt(var) + eps)
    p = norm.cdf(z)
    p = np.where(p>0.5, 1-p, p)

    if tails == 2:
        p = p*2

    return z, p
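A quick sanity check of countTies on a small matrix (a sketch):

a = np.array([[1, 2, 2, 3, 3, 3],
              [1, 2, 3, 4, 5, 6]])
print(countTies(a))
# the first row has one group of 2 tied values and one group of 3, so counts 2 and 3
# appear in that row; the second row has no ties, so it stays all zeros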
I assume your data come in the layout of (time, latitude, longitude), and you are examining the temporal trend for each lat/lon cell.
To simulate this task, I synthesized a sample data array of shape (50, 145, 192). The 50 time points are taken from Example 5.9 of the book Wilks 2011, Statistical methods in the atmospheric sciences. And then I simply duplicated the same time series 27840 times to make it (50, 145, 192).
Below is the computation:
x = np.array([0.44,1.18,2.69,2.08,3.66,1.72,2.82,0.72,1.46,1.30,1.35,0.54,\
2.74,1.13,2.50,1.72,2.27,2.82,1.98,2.44,2.53,2.00,1.12,2.13,1.36,\
4.9,2.94,1.75,1.69,1.88,1.31,1.76,2.17,2.38,1.16,1.39,1.36,\
1.03,1.11,1.35,1.44,1.84,1.69,3.,1.36,6.37,4.55,0.52,0.87,1.51])
# create a big cube with shape: (T, Y, X)
arr = np.zeros([len(x), 145, 192])
for i in range(arr.shape[1]):
    for j in range(arr.shape[2]):
        arr[:, i, j] = x
print(arr.shape)
# re-arrange into tabular layout: (Y*X, T)
arr = np.transpose(arr, [1, 2, 0])
arr = arr.reshape(-1, len(x))
print(arr.shape)
import time
t1 = time.time()
z, p = MannKendallTrend2D(arr, tails=2, axis=1)
p = p.reshape(145, 192)
t2 = time.time()
print('time =', t2-t1)
The p-value for that sample time series is 0.63341565, which I have validated against the pymannkendall module result. Since arr contains merely duplicated copies of x, the resultant p is a 2d array of size (145, 192), with all 0.63341565.
And it took me only 1.28 seconds to compute that.
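For reference, the cross-check against pymannkendall can be as simple as this (a sketch, assuming the package is installed):

import pymannkendall as mk

res = mk.original_test(x)  # x is the 50-point Wilks series defined above
print(res.z, res.p)        # the p-value should match the 0.63341565 reported above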
I am new to Python and trying to learn machine learning. I have tried to write a neural network from scratch with one hidden layer for the famous Iris dataset. It is a three-class classifier whose output is one-hot vectors. I have also taken help from already-written algorithms; for instance, I used the same training set as my testing set.
It is a lot of code to go through, so I'll focus on my biggest problem: how do we subtract the 'y' output (which is a one-hot vector) of dimensions (150, 3) when my softmax output is of shape (150, 21)? I have tried to look online; everyone seems to use this method, but since I am weak in Python I don't understand it. This is the line of code: delta3[range(m1), y] -= 1
If m1 has size (150), I get the error: arrays used as indices must be of integer (or boolean) type. And if I give m1 the size (150, 3), then delta3[range(m1), y] -= 1 raises: TypeError: range() integer end argument expected, got tuple.
Remember: m1 = 150, my y vector is (150, 3), and my softmax output is (150, 21).
My code is:
#labels or classes
#1=iris-setosa
#2=iris-versicolor
#0=iris-virginica
#features
#sepallength
#sepalwidth
#petallengthcm
#petalwidth

import pandas as pd
import matplotlib.pyplot as plt
import csv
import numpy as np

df=pd.read_csv('Iris.csv')
df.convert_objects(convert_numeric=True)
df.fillna(0,inplace=True)
df.drop(['Id'],1,inplace=True)

#function to convert three labels into values 0,1,2
def handle_non_numericaldata(df):
    columns=df.columns.values
    for column in columns:
        text_digit_vals={}
        def convert_to_int(val):
            return text_digit_vals[val]
        if df[column].dtype!=np.int64 and df[column].dtype!=np.float:
            column_contents=df[column].values.tolist()
            unique_elements=set(column_contents)
            x=0
            for unique in unique_elements:
                if unique not in text_digit_vals:
                    text_digit_vals[unique]=x
                    x+=1
            df[column]=list(map(convert_to_int,df[column]))
    return(df)

handle_non_numericaldata(df)

x=np.array(df.drop(['Species'],1).astype(float))
c=np.array(df['Species'])
n_values=(np.max(c)+1)
y=(np.eye(n_values)[c])
m1=np.size(c)
theta=np.ones(shape=(4,1))
theta2=np.ones(shape=(1,21))
#no of examples "m"
#learning rate alpha
alpha=0.01
#regularization parameter
lamda=0.01

for i in range(1,1000):
    z1=np.dot(x,theta)
    sigma=1/(1+np.exp(-z1))
    #activation layer 2.
    a2=sigma
    z2=np.dot(a2,theta2)
    probs=np.exp(z2)
    softmax=probs/np.sum(probs,axis=1,keepdims=True)
    delta3=softmax
    delta3[range(m1), y] -= 1
    A2=np.transpose(a2)
    dw2 = (A2).dot(delta3)
    W2=np.transpose(theta2)
    delta2=delta3.dot(W2)*sigma*(1-sigma)
    X2=np.transpose(x)
    dw1=np.dot(X2,delta2)
    dw2=dw2-lamda*theta2
    dw1=dw1-lamda*theta
    theta =theta -alpha* dw1
    theta2= theta2-alpha * dw2
    correct_logprobs=0
    correct_logprobs=correct_logprobs-np.log(probs[range(m1),y])
    data_loss=np.sum(correct_logprobs)
    data_loss+=lamda/2*(np.sum(np.square(theta))+ np.square(theta2))
    loss=1./m1*data_loss
    if 1000%i==0:
        print("loss after iteration%i:%f",loss)

final1=x.dot(theta)
sigma=1/(1+np.exp(-final1))
z2=sigma.dot(theta2)
exp_scores=np.exp(z2)
probs=exp_scores/np.sum(exp_scores,axis=1,keepdims=True)
print(np.argmax(probs,axis=1))
In Python, range(x, y) generates a sequence of numbers from x up to (but not including) y. Something like range(10) gives 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. Lists in Python need an integer index such as list[0] or list[4], not list[0, 4]. However, Python has built-in slicing that lets you access index x up to index y with the syntax list[x:y]. For example, list[0:4] returns every value from index 0 to 3, so if list = [0,10,3,4,12,5,3] then list[0:4] returns [0,10,3,4].
Try taking a look at list data structures in the Python docs, as well as at how generators work in Python.
I think what you're looking for is something like this: delta3 = [[z-1 for z in delta3[x:y]] for x in range(m1)]. This list comprehension combines two patterns: [x-1 for x in l], which subtracts one from every element of a list, and [l[x:y] for x in range(m)], which generates a list of lists of the values from x to y over a range of m. Though I'm not sure I fully understand what your end goal is.
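For what it's worth, the delta3[range(m1), y] -= 1 pattern from the question assumes y holds integer class labels rather than one-hot rows. A minimal sketch of the usual conversion (this only addresses the indexing, not the (150, 3) vs (150, 21) shape mismatch, which comes from the shape of theta2):

y_labels = y.argmax(axis=1)       # (150,) integer class labels recovered from the one-hot rows
delta3[range(m1), y_labels] -= 1  # subtract 1 from the predicted probability of each true class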
What is a Neural Network?
The term ‘Neural’ has its origin in the basic functional unit of the human (animal) nervous system: the ‘neuron’, or nerve cell, present in the brain and other parts of the body. A neural network is a group of algorithms that captures the underlying relationships in a set of data, loosely modelled on the human brain. A neural network adapts to changing input, so the network produces the best possible result without redesigning the output procedure.
Now, a code example:
import numpy as np

#assign input values
input_value=np.array([[0.26,0.77,0.25],[0.42,0.8,0.25],[0.56,0.53,0.25],[0.29,0.79,0.25]])
input_value.shape

#assign output values
output=np.array([0.644045,0.651730,0.707523,0.644395])
output=output.reshape(4,1)
output

#assign weights
weights=np.array([[0.1],[0.1],[0.1]])
weights.shape
weights

#add bias
bias=0.3

#activation function
def sigmoid_func(x):
    return 1/(1+np.exp(-x))

#derivative of sigmoid function
def der(x):
    return sigmoid_func(x)*(1-sigmoid_func(x))

#updating weights
for epochs in range(10000):
    input_arr=input_value
    #print(input_arr)
    weighted_sum=np.dot(input_arr,weights)+bias
    ### CALCULATION OF PRE ACTIVATION FUNCTION
    first_output=sigmoid_func(weighted_sum)
    #print(first_output)
    error=first_output - output
    #print(error)
    total_error=np.square(np.subtract(first_output,output)).mean()
    #print total error
    first_der=error
    second_der=der(first_output)
    derivative=first_der*second_der
    t_input=input_value.T
    final_derivative=np.dot(t_input,derivative)
    #update weights
    weights=weights-0.05*final_derivative
    #update bias
    for i in derivative:
        bias=bias-0.05*i

print(weights)
print(bias)

#prediction for 1st item
pred=np.array([0.26,0.77,0.25])
result=np.dot(pred,weights)+bias
res=sigmoid_func(result)
print(res)

#prediction for 2nd item
pred=np.array([0.42,0.8,0.25])
result=np.dot(pred,weights)+bias
res=sigmoid_func(result)
print(res)

#prediction for 3rd item
pred=np.array([0.56,0.53,0.25])
result=np.dot(pred,weights)+bias
res=sigmoid_func(result)
print(res)

#prediction for 4th item
pred=np.array([0.29,0.79,0.25])
result=np.dot(pred,weights)+bias
res=sigmoid_func(result)
print(res)
I'm taking a class called numerical methods, where we learn how to write programs for certain problems in physics. We had to write 4 programs which could solve ODEs (implicit/explicit Euler, velocity-Verlet, implicit midpoint rule); now we have to calculate the error by using |y_N - y(T)|. We already have a template which we need to fill out.
This is the code which we have to complete.
def ex2_d():
    T = 0.2
    y0 = np.array([0.3, 0.0])

    all_methods = [explicit_euler, implicit_euler, implicit_mid_point, velocity_verlet]
    all_rhs = 3*[pendulum_rhs] + [pendulum_verlet_rhs]

    resolutions = 2**np.arange(4, 11)

    _, y_exact = ode45(pendulum_rhs, (0.0, T), y0, reltol=1e-12)

    for method, rhs in zip(all_methods, all_rhs):
        error = np.empty(resolutions.size)
        for k, N in enumerate(resolutions):
            # TODO: compute the solution and the error
            error[k] = np.absolute(methode())

        rate = convergence_rate(error, resolutions)
        print(method)
        print("rate: " + str(rate) + "\n")
The only thing I need to fill out is the TODO part. But I don't understand the for loop that loops over k and N in enumerate(resolutions), and why is the resolutions array declared the way it is anyway?
Thank you in advance for your help!
When numerically solving an ODE, you want doubling resolutions (halving step sizes) in order to find the convergence rate, using the standard relation:
(u_h - u_(h/2))/(u_(h/2) - u_(h/4)) = 2^p + O(h)
with u_h the numerical solution at step h, u_(h/2) the solution with step h/2 (i.e. double the resolution) and u_(h/4) the solution with step h/4 (i.e. again double the resolution). The order of the error is p, which gives a convergence rate of h^p.
This is why the resolutions are declared as 2**np.arange(4, 11), which gives [16, 32, 64, 128, 256, 512, 1024]. (You can use other grid sizes, which will change the formula accordingly. For more information, see this.)
To store the errors in a list, you need the corresponding indices of the resolutions, which is why enumerate is used:
enumerate(resolutions) -> [(0,16), (1,32), (2,64), (3,128), (4,256), (5,512), (6,1024)]
which is unpacked by the for loop:
iteration k N
1 0 16
2 1 32
etc.
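To make the rate formula above concrete, here is a tiny self-contained sketch with an artificial second-order error model (the numbers are synthetic, purely for illustration):

import numpy as np

exact = 1.0
C, p_true = 0.5, 2                  # pretend the method has order 2
h = 0.1
u_h  = exact + C * h**p_true        # "numerical" value with step h
u_h2 = exact + C * (h / 2)**p_true  # step h/2
u_h4 = exact + C * (h / 4)**p_true  # step h/4

p_est = np.log2((u_h - u_h2) / (u_h2 - u_h4))
print(p_est)                        # ~2.0, recovering the order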
The aim of this exercise is to compare different methods for solving the differential equation given by pendulum_rhs.
The quantity by which the comparison takes place is the convergence rate. In order to determine this rate you need to solve the DE with varying resolution (of the underlying grid) and compute the error for every resolution.
The resolutions to use are given: resolutions = [16, 32, 64, ...].
So for a given method method, you iterate over the resolutions:
for k in range(len(resolutions)):
    N = resolutions[k]

    # calculate the result using N
    result = method(..., N, ...)

    # store it in the error array
    error[k] = np.abs(y_exact - result)
I am aware of scipy.solve_bvp but it requires that you interpolate your variables which I do not want to do.
I have a boundary value problem of the following form:
y1'(x) = -c1*f1(x)*f2(x)*y2(x) - f3(x)
y2'(x) = f4(x)*y1(x) + f1(x)*y2(x)
y1(x=0)=0, y2(x=1)=0
I have values for x=[0, 0.0001, 0.025, 0.3, ... 0.9999999, 1] on a non-uniform grid and values for all of the variables/functions at only those values of x.
How can I solve this BVP?
This is a new function, and I don't have it on my scipy version (0.17), but I found the source in scipy/scipy/integrate/_bvp.py (github).
The relevant pull request is https://github.com/scipy/scipy/pull/6025, last April.
It is based on a paper and MATLAB implementation,
J. Kierzenka, L. F. Shampine, "A BVP Solver Based on Residual
Control and the MATLAB PSE", ACM Trans. Math. Softw., Vol. 27,
Number 3, pp. 299-316, 2001.
The x mesh handling appears to be:
while True:
....
solve_newton
....
insert_1, = np.nonzero((rms_res > tol) & (rms_res < 100 * tol))
insert_2, = np.nonzero(rms_res >= 100 * tol)
nodes_added = insert_1.shape[0] + 2 * insert_2.shape[0]
if m + nodes_added > max_nodes:
status = 1
if verbose == 2:
nodes_added = "({})".format(nodes_added)
print_iteration_progress(iteration, max_rms_res, m,
nodes_added)
...
if nodes_added > 0:
x = modify_mesh(x, insert_1, insert_2)
h = np.diff(x)
y = sol(x)
where modify_mesh adds nodes to x based on:

insert_1 : ndarray
    Intervals to each insert 1 new node in the middle.
insert_2 : ndarray
    Intervals to each insert 2 new nodes, such that divide an interval
    into 3 equal parts.
From this I deduce that
you can track the addition of nodes with the verbose parameter
nodes are added, but not removed. So the output mesh should include all of your input points.
I assume nodes are added to improve resolution in certain segments of the problem
This is based on reading the code, and not verified with test code. You may be the only person to be asking about this function on SO, and one of the few to have actually used it.
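If it helps, here is a minimal sketch of feeding your own non-uniform grid to solve_bvp as the initial mesh; c1, f1..f4 and xgrid below are just placeholders, not your actual data:

import numpy as np
from scipy.integrate import solve_bvp

c1 = 1.0                                   # placeholder constant
f1 = f2 = f4 = lambda x: np.ones_like(x)   # placeholder functions standing in for the tabulated data
f3 = lambda x: np.zeros_like(x)
xgrid = np.array([0.0, 0.0001, 0.025, 0.3, 0.6, 0.9999999, 1.0])  # a non-uniform mesh

def rhs(x, y):
    # y[0] -> y1, y[1] -> y2
    return np.vstack([-c1 * f1(x) * f2(x) * y[1] - f3(x),
                      f4(x) * y[0] + f1(x) * y[1]])

def bc(ya, yb):
    return np.array([ya[0], yb[1]])        # y1(0) = 0, y2(1) = 0

y_init = np.zeros((2, xgrid.size))         # initial guess on the given mesh
sol = solve_bvp(rhs, bc, xgrid, y_init, verbose=2)
print(sol.status, sol.x.size)              # nodes may be added, but none of the original ones removed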
I'm new to Python. I've done this particular task before in MATLAB, and I'm trying to get the hang of the syntax and particular behaviour of Python, as I'll be using this language much more in future.
The task: I am taking 43,200 single data points (integers, but written as decimals) and performing a fast Fourier transform on a "window" of 600 at a time, shifting this window by 60 data points each time. Hence, this transform will output 600 Fourier coefficients, 720 times; I will end up with a 600 x 720 matrix (rows, columns).
These data points are initially contained within a list and turned into a column vector after being FFT'd. The issue comes when I try to build the matrix from a loop: take the first 600 points, FFT them, and dump them in an empty array. Take the next 600, do the same thing, but now add these two columns together to make two rows, then three, then four... etc. I've been trying for several hours now, but whatever I try I cannot get it to work; it consistently outputs my "final" matrix (the one that was meant to be the generated 600 x 720) with the exact same dimensions as each generated "block".
My code (relevant sections):
for i in range(npoints):
    newdata.append(float(newy.readline()))  #Read data from file

FFT_out = []       #Initialize empty FFT output array
window_size = 600  #Number of points in data "window"
window_skip = 60   #Number of points window moves across
j = 0              #FFT count variable

for i in range(0, npoints, window_skip):
    block = np.fft.fft(newdata[i:i+window_size])  #FFT Computation of "window"
    block = block[:, np.newaxis]  #turn into column vector (n, 1)
    if j == 0:
        FFT_out = block
        j = 1
    else:
        np.hstack((FFT_out, block))
        j = j + 1

print("Shape of FFT matrix:")
print(np.shape(FFT_out))
print("Number of times FFT completed:")
print(j)
At this point, I'm willing to believe it's a fundamental flaw on my understanding of how Python does matrices or deals with arrays. I've tried reading about it, but I still cannot see where I'm going wrong. Any help would be greatly appreciated!
First thing to note is that Python uses indentation to form blocks, so depending on how the code is actually indented, the np.hstack call may never be reached and FFT_out may only be assigned once.
Assuming the indentation shown above is what you actually run, the remaining problem is that hstack returns a concatenation of its arguments without modifying them in place. To accumulate the concatenation, you should assign the result back to FFT_out:
FFT_out = np.hstack((FFT_out, block))
You should then be able to get a 600 x 720 matrix with:
for i in range(0, npoints, window_skip):
    block = np.fft.fft(newdata[i:i+window_size])
    block = block[:, np.newaxis]  #turn into column vector (n, 1)
    if j == 0:
        FFT_out = block
        j = 1
    else:
        FFT_out = np.hstack((FFT_out, block))
        j = j + 1
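An equivalent restructuring (a sketch, with the same assumptions about newdata, npoints, window_size and window_skip as above) collects the columns in a list and concatenates once at the end, instead of growing FFT_out on every iteration:

blocks = [np.fft.fft(newdata[i:i + window_size])
          for i in range(0, npoints, window_skip)]
FFT_out = np.column_stack(blocks)  # one column per FFT'd window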