So the task is to optimise a neural network with a PSO. The PSO needs a one-dimensional list of all the weights and biases, like so: [0.1 0.244 ... 0.214]. The NN needs an array of arrays with different dimensions, like so: [[x,y], [m,n], ... (all the hidden layer matrices) ..., [p,q]], where x and y are the dimensions of the input layer, then come all the hidden layers, and finally p and q are the dimensions of the output layer.
I can easily flatten the array to pass it to the PSO, but I need a method that takes the modified flat array and reshapes it back into an array of arrays with the same dimensions as the original one from the NN.
The dimensions depend on the number of neurons in each layer; we have that information from the start.
I have tried keeping a shapes array and building an indices array to know where each piece stops, but it doesn't seem to work. I am trying something with slicing now, but no cigar yet. Modifying the NN is also an option, but how would I write it so that it takes a predefined list of weights? There might be a very nice and efficient way to do this that I just haven't thought of yet... Any suggestions?
Example:
import numpy as np

a = np.array([1, 2, 3])
b = np.array([7, 8, 9, 10])
c = np.array([12, 13, 14, 15, 16])
b = b.reshape(2, 2)   # reshape returns a new array, so assign it back
arr = []
arr.append(a)
arr.append(b)
arr.append(c)
This is a very simple example of what the list of weights looks like as the NN works with it - a list of multi-dimensional arrays. arr can be converted into a numpy array of objects if necessary with np.asarray(arr).
Flattening is easy, here is how I do it (there might be a better way that doesn't need a loop; if you know one, I'd be thankful if you shared).
Flattening:
new_arr = np.array([])
for i in range(len(arr)):
    new_arr = np.append(new_arr, arr[i].flatten())   # note: append to new_arr, not arr
My question is how to take new_arr and put it back together to look like arr, and whether there is a beautiful and fast way to do it.
You can save the shape in a variable (it's just a tuple). Try something like:
...
old_shape = arr.shape
# ... do flattening here ...
new_arr = new_arr.reshape(old_shape)   # reshape returns a new array
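For a single weight matrix, that round trip looks like this (a minimal sketch of the idea above, with made-up numbers):
import numpy as np

w = np.arange(6).reshape(2, 3)        # a 2x3 weight matrix
old_shape = w.shape                   # (2, 3)
flat = w.flatten()                    # 1-D array handed to the PSO
w_restored = flat.reshape(old_shape)  # back to the original shape
assert np.array_equal(w, w_restored)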
new_arr = np.array([])
shapes = []
for i in range(len(arr)):
    new_arr = np.append(new_arr, arr[i].flatten())
    shapes.append(arr[i].shape)

# do whatever with the flat array here (e.g. hand it to the PSO)

restoredArray = []
offset = 0
for i in range(len(shapes)):
    s = shapes[i]
    n = np.prod(s)
    restoredArray.append(new_arr[offset:offset + n].reshape(s))
    offset += n
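For what it's worth, the same bookkeeping can be pushed onto NumPy with np.concatenate, np.cumsum and np.split. This is a sketch of an alternative, not the code above:
import numpy as np

shapes = [a.shape for a in arr]
sizes = [a.size for a in arr]

flat = np.concatenate([a.ravel() for a in arr])   # flatten everything in one call
# ... hand `flat` to the PSO and get back a modified 1-D array of the same length ...

split_points = np.cumsum(sizes)[:-1]              # boundaries between the pieces
restored = [chunk.reshape(shape)
            for chunk, shape in zip(np.split(flat, split_points), shapes)]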
It may be an easy problem, but I could not find any practical solution. My code has the following segment involving 3 nested for loops. The goal is to create a specialized intensity matrix for my algorithm, for both the prediction and the ground_truth image matrices, as follows:
for i in range(batch):
    for j in range(img_width):
        for k in range(img_height):
            tensor = prediction[i][j][:] - prediction[i][k][:]
            extracted_intensity_pred[i][j][k] = torch.norm(tensor, 2)
            tensor = ground_truth[i][j][:] - ground_truth[i][k][:]
            extracted_intensity_ground_truth[i][j][k] = torch.norm(tensor, 2)
This nested for loop structure slows the execution down considerably. Is there any broadcasting implementation (numpy or pytorch tensor based) that could be used instead?
First, let's clean up some notation: a trailing [:] does nothing.
But before that, what are the dimensions, mostly 3d?
for i in range(batch):
    for j in range(img_width):
        for k in range(img_height):
            tensor = prediction[i,j,:] - prediction[i,k,:]
            # looks like a prediction[:,:,None] - prediction[:,None,:]; making it 4d?
            extracted_intensity_pred[i,j,k] = torch.norm(tensor, 2)
            # what can torch.norm work with?
so maybe it's just
tensor = prediction[:,:,None] - prediction[:,None,:]
extracted_intensity_pred = torch.norm(tensor, ?)
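To make that concrete, here is a sketch of my own (assuming prediction is a float tensor of shape (batch, N, channels) and that both loop indices run over the same axis, as the [i][j] / [i][k] indexing suggests). torch.cdist computes exactly these batched pairwise L2 norms, and the broadcasting version agrees with it:
import torch

batch, n, channels = 4, 8, 3
prediction = torch.randn(batch, n, channels)

# batched pairwise L2 distances, shape (batch, n, n)
extracted_intensity_pred = torch.cdist(prediction, prediction, p=2)

# equivalent broadcasting version: 4-d difference, then the norm over the last axis
diff = prediction[:, :, None, :] - prediction[:, None, :, :]
assert torch.allclose(extracted_intensity_pred, diff.norm(dim=-1), atol=1e-5)
The same call applied to ground_truth replaces the other half of the loop body.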
I have 4 square arrays of the same shape
array1 = 1*np.ones((10,10))
array2 = 2*np.ones((10,10))
array3 = 3*np.ones((10,10))
array4 = 4*np.ones((10,10))
I want to recombine them into one big array in an interleaved mosaic pattern as such:
result = np.asarray([[1,2,1,2,...,1,2],
                     [3,4,3,4,...,3,4],
                     [1,2,1,2,...,1,2],
                     ...
                     [3,4,3,4,...,3,4]])
Where result is twice as big in both dimensions as the original individual images.
Is there an efficient way to do this?
To illustrate my question, I used arrays containing constant values but in reality, these 4 arrays would be different images.
Two common approaches for interlacing data in numpy are:
A) Assign each source to a slice of a blank result array, corresponding to where the data should go:
result = np.zeros((20, 20)) # allocate space
result[::2, ::2] = array1 # put those values in the appropriate spots
result[::2, 1::2] = array2
result[1::2, ::2] = array3
result[1::2, 1::2] = array4
B) Use stacking to stick the data together in a single array, and then reshape to flatten the data in a way that leaves it interlaced. This typically takes a bit of trial and error; after playing around in the REPL a bit I came up with:
result = np.hstack((np.dstack((array1, array2)), np.dstack((array3, array4)))).reshape(20, 20)
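For completeness, here is a quick self-contained check, using the constant example arrays from the question, that the two approaches produce the same mosaic:
import numpy as np

array1 = 1 * np.ones((10, 10))
array2 = 2 * np.ones((10, 10))
array3 = 3 * np.ones((10, 10))
array4 = 4 * np.ones((10, 10))

# Approach A: strided assignment
result_a = np.zeros((20, 20))
result_a[::2, ::2] = array1
result_a[::2, 1::2] = array2
result_a[1::2, ::2] = array3
result_a[1::2, 1::2] = array4

# Approach B: stack, then reshape
result_b = np.hstack((np.dstack((array1, array2)),
                      np.dstack((array3, array4)))).reshape(20, 20)

assert np.array_equal(result_a, result_b)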
I am trying to sample with replacement, by rows, from a base 2D numpy array with shape (4,2), say 10 times. The final output should be a 3D numpy array.
I have tried the code below and it works, but is there a way to do it without the for loop?
base = np.array([[20,30],[50,60],[70,80],[10,30]])
print(np.shape(base))
nsample = 10
tmp = np.zeros((np.shape(base)[0], np.shape(base)[1], nsample))
for i in range(nsample):
    id_pick = np.random.choice(np.shape(base)[0], size=(np.shape(base)[0]))
    print(id_pick)
    boot1 = base[id_pick,:]
    tmp[:,:,i] = boot1
print(tmp)
Here's one vectorized approach -
m,n = base.shape
idx = np.random.randint(0,m,(m,nsample))
out = base[idx].swapaxes(1,2)
The basic idea is that we generate all the needed row indices in one go with np.random.randint as idx. That would be an array of shape (m, nsample). We use this array to index into the input array along the first axis, which selects random rows of base. To get the final output with shape (m, n, nsample), we then swap the last two axes.
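Put together as a runnable snippet (same base and nsample as in the question):
import numpy as np

base = np.array([[20, 30], [50, 60], [70, 80], [10, 30]])
nsample = 10

m, n = base.shape
idx = np.random.randint(0, m, (m, nsample))  # one column of row indices per bootstrap sample
out = base[idx].swapaxes(1, 2)               # shape (m, n, nsample), same layout as tmp above
print(out.shape)   # (4, 2, 10)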
You can use the stack function from numpy. Your code would then look like:
base = np.array([[20,30],[50,60],[70,80],[10,30]])
print(np.shape(base))
nsample = 10
tmp = []
for i in range(nsample):
    id_pick = np.random.choice(np.shape(base)[0], size=(np.shape(base)[0]))
    print(id_pick)
    boot1 = base[id_pick,:]
    tmp.append(boot1)
tmp = np.stack(tmp, axis=-1)
print(tmp)
Based on #Divakar's answer, if you already know the shape of this 2D array, you can treat it as a flat (8,) 1D array while bootstrapping, and then reshape it:
m, n = base.shape
flatbase = np.reshape(base, (m*n,))
idxs = np.random.choice(range(m*n), (numReps, m*n))   # m*n == 8 here; numReps = number of bootstrap samples
bootflats = flatbase[idxs]
boots = np.reshape(bootflats, (numReps, m, n))
I am trying to vectorize an operation with numpy. I use it in a python script that I have profiled, and this operation is the bottleneck, so it needs to be optimized since I will run it many times.
The operation works on a data set of two parts. First, a large set (n) of 1D vectors of different lengths (with maximum length Lmax) whose elements are integers from 1 to maxvalue. The set of vectors is arranged in a 2D array, data, of size (num_samples, Lmax), with trailing elements in each row zeroed. The second part is a set of scalar floats, one associated with each vector, which I have computed and which depend on its length and the integer value at each position. The set of scalars is made into a 1D array, Y, of size num_samples.
The desired operation is to form the average of Y over the n samples, as a function of (value,position along length,length).
This entire operation can be vectorized in matlab with the accumarray function, by using three 2D arrays of the same size as data whose elements are the corresponding value, position, and length indices of the desired final array:
sz_Y   = num_samples;
sz_len = Lmax;
sz_pos = Lmax;
sz_val = maxvalue;
ind_len = repmat(1:sz_len, 1, sz_samples);
ind_pos = repmat(1:sz_pos, sz_samples, 1);
ind_val = data;
ind_Y   = repmat((1:sz_Y)', 1, Lmax);
copiedY = Y(ind_Y);
mask = data > 0;
finalarr = accumarray({ind_val(mask), ind_pos(mask), ind_len(mask)}, copiedY(mask), [sz_val sz_pos sz_len]) / sz_val;
I was hoping to emulate this implementation with np.bincount. However, np.bincount differs from accumarray in two relevant ways:
both arguments must be 1D and of the same size, and
there is no option to choose the shape of the output array.
In the above usage of accumarray, the list of indices, {ind_val(mask),ind_pos(mask),ind_len(mask)}, is a cell array of three index vectors used as index tuples, while np.bincount takes only 1D scalar indices as far as I understand. I expect np.ravel may be useful, but I am not sure how to use it here to do what I want. I am coming to python from matlab and some things do not translate directly, e.g. matlab's colon operator ravels in the opposite order to numpy's ravel. So my question is: how might I use np.bincount, or any other numpy method, to achieve an efficient python implementation of this operation?
EDIT: To avoid wasting anyone's time: for these multi-dimensional index problems with complicated index manipulation, is the recommended route to just use cython and implement the loops explicitly?
EDIT2: An alternative Python implementation I just came up with.
Here is a heavy-RAM solution:
First precalculate:
Using index units for length (i.e., length 1 = 0), make a 4D bool array of size (num_samples, Lmax+1, Lmax+1, maxvalue+1) holding where the conditions are satisfied for each value in Y.
ALLcond = np.zeros((num_samples, Lmax+1, Lmax+1, maxvalue+1), dtype='bool')
for l in range(Lmax+1):
    for i in range(Lmax+1):
        for v in range(maxvalue+1):
            ALLcond[:, l, i, v] = (data[:, i] == v) & (Lvec == l)
Where Lvec=[len(row) for row in data]. Then get the indices for these using np.where and initialize a 4D float array into which you will assign the values of Y:
[ind_Y, ind_len, ind_pos, ind_val] = np.where(ALLcond)
Yval=np.zeros(np.shape(ALLcond),dtype='float')
Now in the loop in which I have to perform the operation, I compute it with the two lines:
Yval[ind_Y, ind_len, ind_pos, ind_val] = Y[ind_Y]
Y_avg = Yval.sum(axis=0) / num_samples   # average over the sample axis
This gives a factor of 4 or so speed-up over the direct loop implementation. I was expecting more. Perhaps this is a more tangible implementation for Python heads to digest. Any faster suggestions are welcome :)
One way is to convert the 3 "indices" to a linear index and then apply bincount. Numpy's ravel_multi_index is essentially the same as MATLAB's sub2ind. So the ported code could be something like:
shape = (Lmax+1, Lmax+1, maxvalue+1)
posvec = np.arange(1, Lmax+1)
ind_len = np.tile(Lvec[:,None], [1, Lmax])
ind_pos = np.tile(posvec, [n, 1])
ind_val = data
Y_copied = np.tile(Y[:,None], [1, Lmax])
mask = posvec <= Lvec[:,None] # fill-value independent
lin_idx = np.ravel_multi_index((ind_len[mask], ind_pos[mask], ind_val[mask]), shape)
Y_avg = np.bincount(lin_idx, weights=Y_copied[mask], minlength=np.prod(shape)) / n
Y_avg.shape = shape
This is assuming data has shape (n, Lmax), Lvec is a Numpy array, etc. You may need to adapt the code a little to get rid of off-by-one errors.
One could argue that the tile operations are not very efficient and not very "numpythonic". Something with broadcast_arrays could be nice, but I think I prefer this way:
shape = (Lmax+1, Lmax+1, maxvalue+1)
posvec = np.arange(1, Lmax+1)
mask = posvec <= Lvec[:,None]  # fill-value independent; defined before it is used
len_idx = np.repeat(Lvec, Lvec)
pos_idx = np.broadcast_to(posvec, data.shape)[mask]
val_idx = data[mask]
Y_copied = np.repeat(Y, Lvec)
lin_idx = np.ravel_multi_index((len_idx, pos_idx, val_idx), shape)
Y_avg = np.bincount(lin_idx, weights=Y_copied, minlength=np.prod(shape)) / n
Y_avg.shape = shape
Note broadcast_to was added in Numpy 1.10.0.
I have a list of several hundred 10x10 arrays that I want to stack together into a single Nx10x10 array. At first I tried a simple
newarray = np.array(mylist)
But that returned with "ValueError: setting an array element with a sequence."
Then I found the online documentation for dstack(), which looked perfect: "...This is a simple way to stack 2D arrays (images) into a single 3D array for processing." Which is exactly what I'm trying to do. However,
newarray = np.dstack(mylist)
tells me "ValueError: array dimensions must agree except for d_0", which is odd because all my arrays are 10x10. I thought maybe the problem was that dstack() expects a tuple instead of a list, but
newarray = np.dstack(tuple(mylist))
produced the same result.
At this point I've spent about two hours searching here and elsewhere to find out what I'm doing wrong and/or how to go about this correctly. I've even tried converting my list of arrays into a list of lists of lists and then back into a 3D array, but that didn't work either (I ended up with lists of lists of arrays, followed by the "setting an array element with a sequence" error again).
Any help would be appreciated.
newarray = np.dstack(mylist)
should work. For example:
import numpy as np
# Here is a list of five 10x10 arrays:
x = [np.random.random((10,10)) for _ in range(5)]
y = np.dstack(x)
print(y.shape)
# (10, 10, 5)
# To get the shape to be Nx10x10, you could use rollaxis:
y = np.rollaxis(y,-1)
print(y.shape)
# (5, 10, 10)
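Alternatively, assuming NumPy 1.10 or newer where np.stack is available, you can build the Nx10x10 array directly and skip the rollaxis step:
import numpy as np

x = [np.random.random((10, 10)) for _ in range(5)]
y = np.stack(x, axis=0)   # stacks along a new leading axis
print(y.shape)
# (5, 10, 10)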
np.dstack returns a new array. Thus, using np.dstack requires as much additional memory as the input arrays. If you are tight on memory, an alternative to np.dstack which requires less memory is to
allocate space for the final array first, and then pour the input arrays into it one at a time.
For example, if you had 58 arrays of shape (159459, 2380), then you could use
y = np.empty((159459, 2380, 58))
for i in range(58):
    # instantiate the input arrays one at a time
    x = np.random.random((159459, 2380))
    # copy x into y
    y[..., i] = x