I'm extracting some features from some data generated with an accelerometer and I have the following arrays:
X_mfccs_processed (list with 40 values)
Y_mfccs_processed (list with 40 values)
Z_mfccs_processed (list with 40 values)
X_mean (1 value)
Y_mean (1 value)
Z_mean (1 value)
At the moment I'm able to create a 3D array [shape=(1,40,3)] and insert my MFCC arrays into it:
self.extracted_features = np.ndarray(shape=(1, len(self.X_mfccs_processed), 3))
self.extracted_features[:,:,0] = self.X_mfccs_processed
self.extracted_features[:,:,1] = self.Y_mfccs_processed
self.extracted_features[:,:,2] = self.Z_mfccs_processed
My question is: how can I create a 4D array [shape=(1,40,1,3)] in which to also store my mean values?
Instead of assigning values to a preallocated array, a better way to build the array is:
self.extracted_features = np.array([X_mfccs_processed,Y_mfccs_processed,Z_mfccs_processed]).T[None,...]
or equivalently:
self.extracted_features = np.array([X_mfccs_processed,Y_mfccs_processed,Z_mfccs_processed]).T.reshape(1,-1,3)
However, you cannot add another dimension with shape 1 and insert the mean values into it. A dimension's value is the number of elements along that dimension; an easy way to think about it is that an array of shape (1,N) is a 1xN matrix, so an extra axis of length 1 gives you no room to store a mean alongside a list of size N. You need another way to store your means. I would suggest a separate array like this, with shape (1,3):
self.extracted_features_mean = np.array([X_mean,Y_mean,Z_mean]).T[None,...]
And use similar indexing to access the mean. An alternative would be using dictionaries. Depending on your application, you can pick one that is easier and/or faster.
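For instance, a minimal sketch of the dictionary alternative (attribute names as in the question; the key names are just illustrative):
self.extracted_features = {
    "mfccs": np.array([self.X_mfccs_processed,
                       self.Y_mfccs_processed,
                       self.Z_mfccs_processed]).T[None, ...],  # shape (1, 40, 3)
    "means": np.array([self.X_mean, self.Y_mean, self.Z_mean]),  # shape (3,)
}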
Usually np.reshape(self.extracted_features, (1,40,1,3)) works well.
The shape would have to be different to store the mean values as well; there isn't enough space in (1,40,1,3).
(1,40,1,6) or (1,40,2,3)
seem reasonable shapes.
For (1,40,1,6):
self.extracted_features = np.ndarray(shape=(1, len(self.X_mfccs_processed), 1, 6))
self.extracted_features[:,:,0,0] = self.X_mfccs_processed  # index the length-1 axis so the 40-element arrays broadcast
self.extracted_features[:,:,0,1] = self.Y_mfccs_processed
self.extracted_features[:,:,0,2] = self.Z_mfccs_processed
self.extracted_features[:,:,0,3] = self.X_mean
self.extracted_features[:,:,0,4] = self.Y_mean
self.extracted_features[:,:,0,5] = self.Z_mean
For (1,40,2,3):
self.extracted_features = np.ndarray(shape=(1, len(self.X_mfccs_processed), 2, 3))
self.extracted_features[:,:,0,0] = self.X_mfccs_processed
self.extracted_features[:,:,0,1] = self.Y_mfccs_processed
self.extracted_features[:,:,0,2] = self.Z_mfccs_processed
self.extracted_features[:,:,1,0] = self.X_mean
self.extracted_features[:,:,1,1] = self.Y_mean
self.extracted_features[:,:,1,2] = self.Z_mean
I should mention that this broadcasts the mean values, meaning each mean is duplicated (40 times). That is bad for storage, but if you are doing some type of machine learning or numerics it might be a good tradeoff. Alternatively, you could use a (1,41,1,3) shape, as sketched below.
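A minimal sketch of that (1,41,1,3) alternative, storing the means in an extra 41st row (assuming the same attribute names as above):
self.extracted_features = np.ndarray(shape=(1, len(self.X_mfccs_processed) + 1, 1, 3))
self.extracted_features[:,:-1,0,0] = self.X_mfccs_processed
self.extracted_features[:,:-1,0,1] = self.Y_mfccs_processed
self.extracted_features[:,:-1,0,2] = self.Z_mfccs_processed
self.extracted_features[:,-1,0,0] = self.X_mean  # means live in the last row
self.extracted_features[:,-1,0,1] = self.Y_mean
self.extracted_features[:,-1,0,2] = self.Z_mean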
Related
I am trying to update very specific indices of a multidimensional tensor in PyTorch, and I am not sure how to access the correct indices. I can do this in a very straightforward way in NumPy:
import numpy as np
#set up the array containing the data
data = 100*np.ones((10,10,2))
data[5:,:,:] = 0
#select the data points that I want to update
idxs = np.nonzero(data.sum(2))
#generate the updates that I am going to do
updates = np.random.randint(5,size=(idxs[0].shape[0],2))
#update the data
data[idxs[0],idxs[1],:] = updates
I need to implement this in PyTorch, but I am not sure how. It seems like I need the scatter function, but that only works along a single dimension instead of the multiple dimensions I need. How can I do this?
These operations work exactly the same in their PyTorch counterparts, except for torch.nonzero, which by default returns a tensor of size [z, n] (where z is the number of non-zero elements and n the number of dimensions) instead of a tuple of n tensors with size [z] (as NumPy does), but that behaviour can be changed by setting as_tuple=True.
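For example, a quick illustration of the two return conventions:
import torch

t = torch.tensor([[1, 0], [0, 2]])
print(torch.nonzero(t).shape)                # torch.Size([2, 2]): one row per non-zero element
print(len(torch.nonzero(t, as_tuple=True)))  # 2: one index tensor per dimension, each of size [2]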
Other than that you can directly translate it to PyTorch, but you need to make sure that the types match, because you cannot assign a tensor of type torch.long (default of torch.randint) to a tensor of type torch.float (default of torch.ones). In this case, data is probably meant to have type torch.long:
#set up the array containing the data
data = 100*torch.ones((10,10,2), dtype=torch.long)
data[5:,:,:] = 0
#select the data points that I want to update
idxs = torch.nonzero(data.sum(2), as_tuple=True)
#generate the updates that I am going to do
updates = torch.randint(5,size=(idxs[0].shape[0],2))
#update the data
data[idxs[0],idxs[1],:] = updates
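As a side note, the same update can also be written with Tensor.index_put_, which takes a tuple of index tensors much like the indexing assignment above:
data.index_put_((idxs[0], idxs[1]), updates)  # equivalent to data[idxs[0],idxs[1],:] = updates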
I have 4 square arrays of the same shape
array1 = 1*np.ones((10,10))
array2 = 2*np.ones((10,10))
array3 = 3*np.ones((10,10))
array4 = 4*np.ones((10,10))
I want to recombine them into one big array in an interleaved mosaic pattern as such:
result = np.asarray([[1,2,1,2,...,1,2],\
[3,4,3,4,...,3,4],\
[1,2,1,2,...,1,2],\
...
[3,4,3,4,...,3,4]])
Where result is twice as big in both dimensions as the original individual images.
Is there an efficient way to do this?
To illustrate my question, I used arrays containing constant values but in reality, these 4 arrays would be different images.
Two common approaches for interlacing data in numpy are:
A) Assign each source to a slice of a blank result array, corresponding to where the data should go:
result = np.zeros((20, 20)) # allocate space
result[::2, ::2] = array1 # put those values in the appropriate spots
result[::2, 1::2] = array2
result[1::2, ::2] = array3
result[1::2, 1::2] = array4
B) Use stacking to stick the data together in a single array, then reshape to flatten the data in a way that leaves it interlaced. This typically requires a bit of trial and error, but after playing around with the REPL I came up with:
result = np.hstack((np.dstack((array1, array2)), np.dstack((array3, array4)))).reshape(20, 20)
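As a quick sanity check (using the constant arrays from the question), both approaches produce identical results:
a = np.zeros((20, 20))
a[::2, ::2] = array1
a[::2, 1::2] = array2
a[1::2, ::2] = array3
a[1::2, 1::2] = array4
b = np.hstack((np.dstack((array1, array2)), np.dstack((array3, array4)))).reshape(20, 20)
print(np.array_equal(a, b))  # True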
So I am a little new to using matrices in Python, and I am looking for the best way to perform the following operation.
Say I have a vector of an arbitrary length, like this:
data = np.array(range(255))
And I want to fit this data inside a matrix with a shape like so:
concept = np.zeros((3, 9, 6))
Now, obviously this will not fit, and results in an error:
ValueError: cannot reshape array of size 255 into shape (3,9,6)
What would be the best way to go about fitting as much of the data vector inside the first matrix with the shape (3, 9, 6) while making sure any "overflow" is stored in a second (or third, fourth, etc.) matrix?
Does this make sense?
Basically, I want to be able to take a vector of any size and produce an arbitrary number of matrices with the data shaped according to the 3, 9, 6 dimensions.
Thank you for your help.
def each_matrix(a, dims):
    size = dims.prod()
    # padding with size-1 zeros makes the loop below a ceiling division over the data
    padded = np.concatenate([a, np.zeros(size - 1)])
    for i in range(len(padded) // size):  # integer division for Python 3
        yield padded[i*size : (i+1)*size].reshape(dims)

for matrix in each_matrix(np.array(range(255)),
                          dims=np.array([3, 9, 6])):
    print(str(matrix) + '\n\n-------\n')
This will fill the last matrix with zeros.
Here is a rough solution to your problem.
def split_padded(a, n):
    padding = (-len(a)) % n  # zeros needed; 0 when len(a) is already a multiple of n
    num_of_splits = (len(a) + padding) // n
    print(padding, num_of_splits)
    return np.split(np.concatenate((a, np.zeros(padding))), num_of_splits)

data = np.array(range(255))
splitnum = 3*9*6
splitdata = split_padded(data, splitnum)
for mat in splitdata:
    print(mat.reshape(3, 9, 6))
It is fairly rough but works for 1D input arrays.
First it calculates the number of zeros to pad in padding, then the number of matrices that can be made from the padded input in num_of_splits, and finally it does the splitting in the last line.
I am trying to vectorize an operation using NumPy in a Python script that I have profiled; this operation is the bottleneck and needs to be optimized since I will run it many times.
The operation is on a data set of two parts. First, a large set (n) of 1D vectors of different lengths (with maximum length Lmax) whose elements are integers from 1 to maxvalue. The set of vectors is arranged in a 2D array, data, of size (num_samples,Lmax) with trailing elements in each row zeroed. The second part is a set of scalar floats, one associated with each vector, that I have computed and which depend on its length and the integer value at each position. The set of scalars is made into a 1D array, Y, of size num_samples.
The desired operation is to form the average of Y over the n samples, as a function of (value,position along length,length).
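For concreteness, a direct (unvectorized) sketch of the operation might look like this, assuming data, Y, num_samples, Lmax, and maxvalue as described, plus a vector Lvec of true vector lengths (Lvec is my name for it, not from the original script):
# accumulate Y into bins indexed by (value, position, length),
# then average over the n samples
avg = np.zeros((maxvalue + 1, Lmax + 1, Lmax + 1))
for s in range(num_samples):
    for p in range(Lvec[s]):
        avg[data[s, p], p, Lvec[s]] += Y[s]
avg /= num_samples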
This entire operation can be vectorized in matlab with use of the accumarray function: by using 3 2D arrays of the same size as data, whose elements are the corresponding value, position, and length indices of the desired final array:
sz_Y = num_samples;
sz_len = Lmax
sz_pos = Lmax
sz_val = maxvalue
ind_len = repmat( 1:sz_len ,1 ,sz_samples);
ind_pos = repmat( 1:sz_pos ,sz_samples,1 );
ind_val = data
ind_Y = repmat((1:sz_Y)',1 ,Lmax );
copiedY=Y(ind_Y);
mask = data>0;
finalarr=accumarray({ind_val(mask),ind_pos(mask),ind_len(mask)},copiedY(mask), [sz_val sz_pos sz_len])/sz_val;
I was hoping to emulate this implementation with np.bincount. However, np.bincount differs from accumarray in two relevant ways:
both arguments must be of same 1D size, and
there is no option to choose the shape of the output array.
In the above usage of accumarray, the list of indices, {ind_val(mask),ind_pos(mask),ind_len(mask)}, is a 1D cell array of arrays used as index tuples, while np.bincount takes a single 1D array of scalar bins as far as I understand. I expect np.ravel may be useful here but am not sure how to use it to do what I want. I am coming to Python from MATLAB and some things do not translate directly, e.g. MATLAB's colon operator ravels in column-major order, the opposite of np.ravel's default row-major order. So my question is: how might I use np.bincount, or any other NumPy method, to achieve an efficient Python implementation of this operation?
EDIT: To avoid wasting time: for these multi-dimensional index problems with complicated index manipulation, is the recommended route just to use Cython to implement the loops explicitly?
EDIT2: Alternative Python implementation I just came up with.
Here is a RAM-heavy solution:
First precalculate:
Using zero-based index units for length (i.e., length 1 maps to index 0), make a 4D bool array of size (num_samples, Lmax+1, Lmax+1, maxvalue+1) holding where the conditions are satisfied for each value in Y.
ALLcond = np.zeros((num_samples, Lmax+1, Lmax+1, maxvalue+1), dtype='bool')
for l in range(Lmax+1):
    for i in range(Lmax):  # data has Lmax columns
        for v in range(maxvalue+1):
            ALLcond[:, l, i, v] = (data[:, i] == v) & (Lvec == l)
Where Lvec=[len(row) for row in data]. Then get the indices for these using np.where and initialize a 4D float array into which you will assign the values of Y:
[ind_Y, ind_len, ind_pos, ind_val] = np.where(ALLcond)
Yval=np.zeros(np.shape(ALLcond),dtype='float')
Now in the loop in which I have to perform the operation, I compute it with the two lines:
Yval[ind_Y,ind_len,ind_pos,ind_val]=Y[ind_Y]
Y_avg = Yval.sum(axis=0) / num_samples
This gives a factor of 4 or so speedup over the direct loop implementation. I was expecting more. Perhaps this is a more tangible implementation for Python heads to digest. Any faster suggestions are welcome :)
One way is to convert the 3 "indices" to a linear index and then apply bincount. Numpy's ravel_multi_index is essentially the same as MATLAB's sub2ind. So the ported code could be something like:
shape = (Lmax+1, Lmax+1, maxvalue+1)
posvec = np.arange(1, Lmax+1)
ind_len = np.tile(Lvec[:,None], [1, Lmax])
ind_pos = np.tile(posvec, [n, 1])
ind_val = data
Y_copied = np.tile(Y[:,None], [1, Lmax])
mask = posvec <= Lvec[:,None] # fill-value independent
lin_idx = np.ravel_multi_index((ind_len[mask], ind_pos[mask], ind_val[mask]), shape)
Y_avg = np.bincount(lin_idx, weights=Y_copied[mask], minlength=np.prod(shape)) / n
Y_avg.shape = shape
This is assuming data has shape (n, Lmax), Lvec is a NumPy array, etc. You may need to adapt the code a little to get rid of off-by-one errors.
One could argue that the tile operations are not very efficient and not very "numpythonic". Something with broadcast_arrays could be nice, but I think I prefer this way:
shape = (Lmax+1, Lmax+1, maxvalue+1)
posvec = np.arange(1, Lmax+1)
mask = posvec <= Lvec[:,None]  # fill-value independent; must be computed before it is used below
len_idx = np.repeat(Lvec, Lvec)
pos_idx = np.broadcast_to(posvec, data.shape)[mask]
val_idx = data[mask]
Y_copied = np.repeat(Y, Lvec)
lin_idx = np.ravel_multi_index((len_idx, pos_idx, val_idx), shape)
Y_avg = np.bincount(lin_idx, weights=Y_copied, minlength=np.prod(shape)) / n
Y_avg.shape = shape
Note broadcast_to was added in Numpy 1.10.0.
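If broadcast_to is not available, np.broadcast_arrays (mentioned above) can stand in for it, e.g.:
pos_idx = np.broadcast_arrays(posvec, data)[0][mask]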
I have a list which contains 1000 integers. The 1000 integers represent the 20x50 elements of a two-dimensional array, which I read from a file into the list.
I need to walk through the list with an indicator in order to find elements that are close to each other. I want my indicator to be represented not by a simple index i but by two indices x, y, so I can know where my indicator is along the list.
I tried to reshape the list like this:
data = np.array( l )
shape = ( 20, 50 )
data.reshape( shape )
but I don't know how to access the data array.
Update: Is there any way to find the x, y indices of the integers that are smaller than NUM (let's say NUM=12)?
According to the documentation of numpy.reshape, it returns a new array object with the new shape specified by the parameters (given that, with the new shape, the number of elements in the array remains unchanged), without changing the shape of the original object. So when you call the data.reshape() function, you should assign the result back to data for the change to be reflected in data.
Example -
data = data.reshape( shape ) # where shape = (20,50)
Also, another way to change the shape, is to directly assign the new shape to the data.shape property.
Example -
shape = (20,50)
data.shape = shape # where shape is the new shape
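Regarding the update: once data has the (20, 50) shape, one straightforward way (a sketch, using NUM=12 from the question) is np.argwhere, which returns one (x, y) row per matching element:
NUM = 12
xy_indices = np.argwhere(data < NUM)  # array of shape (k, 2); rows are (x, y) pairs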