Fastest way to fill numpy array with new arrays from function - python

I have a function f(a) that takes one entry from a testarray and returns an array with 5 values:
f(testarray[0])
#Output: array([[0, 1, 5, 3, 2]])
Since f(testarray[0]) is the result of an experiment, I want to run this function f for each entry of the testarray and store each result in a new NumPy array. I always thought this would be quite simple: just take an empty NumPy array with the length of the testarray and save the results into it the following way:
N = 1000 #Number of entries of the testarray
test_result = np.zeros([N, 5], dtype=int)
for i in testarray:
    test_result[i] = f(i)
When I run this, I don't receive any error message, but the results are nonsense: half of test_result stays empty while the rest is filled with implausible values. Since f() works perfectly for a single entry of the testarray, I suppose something about the way I save the results into test_result is wrong. What am I missing here?
(I know that I could collect the results by appending to an empty list, but that method is too slow for the large number of times I want to run the function.)

Since you don't seem to understand indexing, stick with this approach
alist = [f(i) for i in testarray]
arr = np.array(alist)
I could show how to use row indices and testarray values together, but that requires more explanation.
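For reference, a minimal sketch of that approach, assuming f(value) returns five values as in the question: enumerate pairs each row position with the corresponding testarray value, so results land in row order regardless of what the values themselves are.
test_result = np.zeros([len(testarray), 5], dtype=int)
for row, value in enumerate(testarray):
    # row is the destination index; value is the experiment input
    test_result[row] = f(value)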

Your problem can be reproduced with the following small example:
testarray = np.array([5, 6, 7, 3, 1])
def f(x):
    return np.array([x * i for i in np.arange(1, 6)])
f(testarray[0])
# [ 5 10 15 20 25]
test_result = np.zeros([len(testarray), 5], dtype=int) # len(testarray) or testarray.shape[0]
So, as hpaulj mentioned in the comments, you must be careful how to use indexing:
for i in range(len(testarray)):
    test_result[i] = f(testarray[i])
# [[ 5 10 15 20 25]
# [ 6 12 18 24 30]
# [ 7 14 21 28 35]
# [ 3 6 9 12 15]
# [ 1 2 3 4 5]]
There is another case, where testarray is an index array containing shuffled integers from 0 to N-1 that are used to fill the zero array, i.e. test_result. For this case we can create a reproducible example as:
testarray = np.array([4, 3, 0, 1, 2])
def f(x):
    return np.array([x * i for i in np.arange(1, 6)])
f(testarray[0])
# [ 4 8 12 16 20]
test_result = np.zeros([len(testarray), 5], dtype=int)
So, using your loop will get the following result:
for i in testarray:
    test_result[i] = f(i)
# [[ 0 0 0 0 0]
# [ 1 2 3 4 5]
# [ 2 4 6 8 10]
# [ 3 6 9 12 15]
# [ 4 8 12 16 20]]
As can be understood from this loop, if the index array does not cover every value from 0 to N-1, some rows of the zero array will be left zero (unchanged):
testarray = np.array([4, 2, 4, 1, 2])
for i in testarray:
    test_result[i] = f(i)
# [[ 0 0 0 0 0] # <--
# [ 1 2 3 4 5]
# [ 2 4 6 8 10]
# [ 0 0 0 0 0] # <--
# [ 4 8 12 16 20]]

Cumulative subtraction from first row

I have one series and one DataFrame, all integers.
s = [10, 10, 10]
m = [[0, 0, 0, 0, 3, 4, 5],
     [0, 0, 0, 0, 1, 1, 1],
     [10, 0, 0, 0, 0, 5, 5]]
I want to return a matrix containing the cumulative differences in place of the existing numbers.
Output:
n = [[10, 10, 10, 10, 7, 3, -2],
     [10, 10, 10, 10, 9, 8, 7],
     [0, 0, 0, 0, 0, -5, -10]]
Calculate the cumsum of the DataFrame along rows first and then subtract from the Series:
import pandas as pd
s = pd.Series(s)
df = pd.DataFrame(m)
-df.cumsum(1).sub(s, axis=0)
# 0 1 2 3 4 5 6
#0 10 10 10 10 7 3 -2
#1 10 10 10 10 9 8 7
#2 0 0 0 0 0 -5 -10
You can directly compute a cumulative difference using np.subtract.accumulate:
# make a copy
>>> n = np.array(m)
# replace first column
>>> n[:, 0] = s - n[:, 0]
# subtract in-place
>>> np.subtract.accumulate(n, axis=1, out=n)
array([[ 10, 10, 10, 10, 7, 3, -2],
[ 10, 10, 10, 10, 9, 8, 7],
[ 0, 0, 0, 0, 0, -5, -10]])
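For comparison, the same result can also be sketched with plain broadcasting, assuming s and m are NumPy arrays built from the question's data: subtract the row-wise cumulative sum from the series reshaped to a column.
import numpy as np
s = np.array([10, 10, 10])
m = np.array([[0, 0, 0, 0, 3, 4, 5],
              [0, 0, 0, 0, 1, 1, 1],
              [10, 0, 0, 0, 0, 5, 5]])
n = s[:, None] - np.cumsum(m, axis=1)  # s[:, None] broadcasts down each row
# [[ 10  10  10  10   7   3  -2]
#  [ 10  10  10  10   9   8   7]
#  [  0   0   0   0   0  -5 -10]]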

Extract a block of rows from 2D numpy

I know this question might be trivial but I am in the learning process. Given a NumPy 2D array, I want to take a block of rows using a slicing approach. For instance, from the following matrix, I want to extract only the first three rows, so from:
[[ 1 2 3 4]
[ 5 6 7 8]
[ 9 10 11 12]
[ 28 9 203 102]
[577 902 11 101]]
I want:
[[ 1 2 3 4]
[ 5 6 7 8]
[ 9 10 11 12]]
My code here is actually still missing something. I appreciate any hint.
X = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [28, 9, 203, 102], [577, 902, 11, 101]]
X = np.array(X)
X_sliced = X[3,:]
print(X_sliced)
NumPy arrays can be thought of as nested lists of lists: element 1 is list 1, element 2 is list 2, and so on.
You can pull out a single row with x[n], where n is the row number you want.
You can pull out a range of rows with x[n:m], where n is the first row you want and m is one past the last row (the end index is exclusive).
If you leave out n or m and write x[n:] or x[:m], Python fills in the blank with the start or the end of the list. For example, x[n:] returns all rows from n to the end, and x[:m] returns all rows from the start up to (but not including) m.
You can accomplish what you want with x[:3], which is equivalent to asking for x[0:3].
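Applied to the array in the question, a quick check:
X = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [28, 9, 203, 102], [577, 902, 11, 101]])
X_sliced = X[:3]  # rows 0, 1 and 2; note X[3, :] picks only the single row 3
print(X_sliced)
# [[ 1  2  3  4]
#  [ 5  6  7  8]
#  [ 9 10 11 12]]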

split numpy multidimensional array into equal pieces [duplicate]

Is there a way to slice a 2d array in numpy into smaller 2d arrays?
Example
[[1,2,3,4], -> [[1,2] [3,4]
[5,6,7,8]] [5,6] [7,8]]
So I basically want to cut down a 2x4 array into 2 2x2 arrays. Looking for a generic solution to be used on images.
There was another question a couple of months ago which clued me in to the idea of using reshape and swapaxes. The h//nrows makes sense since this keeps the first block's rows together. It also makes sense that you'll need nrows and ncols to be part of the shape. -1 tells reshape to fill in whatever number is necessary to make the reshape valid. Armed with the form of the solution, I just tried things until I found the formula that works.
You should be able to break your array into "blocks" using some combination of reshape and swapaxes:
def blockshaped(arr, nrows, ncols):
    """
    Return an array of shape (n, nrows, ncols) where
    n * nrows * ncols = arr.size

    If arr is a 2D array, the returned array should look like n subblocks with
    each subblock preserving the "physical" layout of arr.
    """
    h, w = arr.shape
    assert h % nrows == 0, f"{h} rows is not evenly divisible by {nrows}"
    assert w % ncols == 0, f"{w} cols is not evenly divisible by {ncols}"
    return (arr.reshape(h // nrows, nrows, -1, ncols)
               .swapaxes(1, 2)
               .reshape(-1, nrows, ncols))
turns c
c = np.arange(24).reshape((4, 6))
print(c)
[out]:
[[ 0 1 2 3 4 5]
[ 6 7 8 9 10 11]
[12 13 14 15 16 17]
[18 19 20 21 22 23]]
into
print(blockshaped(c, 2, 3))
[out]:
[[[ 0 1 2]
[ 6 7 8]]
[[ 3 4 5]
[ 9 10 11]]
[[12 13 14]
[18 19 20]]
[[15 16 17]
[21 22 23]]]
I've posted an inverse function, unblockshaped, here, and an N-dimensional generalization here. The generalization gives a little more insight into the reasoning behind this algorithm.
Note that there is also superbatfish's blockwise_view. It arranges the blocks in a different format (using more axes) but it has the advantage of (1) always returning a view and (2) being capable of handling arrays of any dimension.
It seems to me that this is a task for numpy.split or some variant.
e.g.
a = np.arange(30).reshape([5, 6])  # a.shape = (5, 6)
a1 = np.split(a, 3, axis=1)
# 'a1' is a list of 3 arrays of shape (5, 2)
a2 = np.split(a, [2, 4])
# 'a2' is a list of three arrays of shape (2, 6), (2, 6), (1, 6)
If you have an NxN image you can create, e.g., a list of 2 NxN/2 subimages, and then divide them along the other axis.
numpy.hsplit and numpy.vsplit are also available.
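For instance, a quick sketch of the two wrappers on the same array:
b1 = np.hsplit(a, 3)       # same as np.split(a, 3, axis=1): three (5, 2) arrays
b2 = np.vsplit(a, [2, 4])  # same as np.split(a, [2, 4], axis=0): (2, 6), (2, 6), (1, 6)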
There are some other answers that seem well-suited for your specific case already, but your question piqued my interest in a memory-efficient solution usable up to the maximum number of dimensions that NumPy supports, and I ended up spending most of the afternoon on a possible method. (The method itself is relatively simple; most of the time was spent researching what NumPy has available, since I still haven't used most of its really fancy features.)
def blockgen(array, bpa):
    """Creates a generator that yields multidimensional blocks from the given
    array(_like); bpa is an array_like consisting of the number of blocks per axis
    (minimum of 1, must be a divisor of the corresponding axis size of array). As
    the blocks are selected using normal numpy slicing, they will be views rather
    than copies; this is good for very large multidimensional arrays that are being
    blocked, and for very large blocks, but it also means that the result must be
    copied if it is to be modified (unless modifying the original data as well is
    intended)."""
    bpa = np.asarray(bpa)  # in case bpa wasn't already an ndarray

    # parameter checking
    if array.ndim != bpa.size:  # bpa doesn't match array dimensionality
        raise ValueError("Size of bpa must be equal to the array dimensionality.")
    if (not np.issubdtype(bpa.dtype, np.integer)  # bpa must be all integers
            or (bpa < 1).any()                    # all values in bpa must be >= 1
            or (array.shape % bpa).any()):        # % != 0 means not evenly divisible
        raise ValueError("bpa ({0}) must consist of nonzero positive integers "
                         "that evenly divide the corresponding array axis "
                         "size".format(bpa))

    # generate block edge indices
    rgen = (np.r_[:array.shape[i] + 1:array.shape[i] // blk_n]
            for i, blk_n in enumerate(bpa))

    # build slice sequences for each axis (unfortunately broadcasting
    # can't be used to make the items easy to operate over)
    c = [[np.s_[i:j] for i, j in zip(r[:-1], r[1:])] for r in rgen]

    # Now to get the blocks; this is slightly less efficient than it could be
    # because numpy doesn't like jagged arrays and I didn't feel like writing
    # a ufunc for it.
    for idxs in np.ndindex(*bpa):
        blockbounds = tuple(c[j][idxs[j]] for j in range(bpa.size))
        yield array[blockbounds]
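A small usage sketch (the array and block counts here are just for illustration): two blocks per axis on a 4x6 array yields four 2x3 views.
a = np.arange(24).reshape(4, 6)
for block in blockgen(a, (2, 2)):
    print(block.shape)  # each block is a (2, 3) view into a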
Your question is practically the same as this one. You can use the one-liner with np.ndindex() and reshape():
def cutter(a, r, c):
    lenr = a.shape[0] // r
    lenc = a.shape[1] // c
    return np.array([a[i*r:(i+1)*r, j*c:(j+1)*c]
                     for (i, j) in np.ndindex(lenr, lenc)]).reshape(lenr, lenc, r, c)
To create the result you want:
a = np.arange(1, 9).reshape(2, 4)
#array([[1, 2, 3, 4],
# [5, 6, 7, 8]])
cutter( a, 1, 2 )
#array([[[[1, 2]],
# [[3, 4]]],
# [[[5, 6]],
# [[7, 8]]]])
Here is a minor enhancement to TheMeaningfulEngineer's answer that handles the case when the big 2D array cannot be perfectly sliced into equally sized subarrays:
def blockfy(a, p, q):
    '''
    Divides array a into subarrays of size p-by-q
    p: block row size
    q: block column size
    '''
    m = a.shape[0]  # image row size
    n = a.shape[1]  # image column size

    # pad array with NaNs so it can be divided by p row-wise and by q column-wise
    bpr = ((m - 1) // p + 1)  # blocks per row axis
    bpc = ((n - 1) // q + 1)  # blocks per column axis
    M = p * bpr
    N = q * bpc

    A = np.nan * np.ones([M, N])
    A[:a.shape[0], :a.shape[1]] = a

    block_list = []
    previous_row = 0
    for row_block in range(bpr):
        previous_row = row_block * p
        previous_column = 0
        for column_block in range(bpc):
            previous_column = column_block * q
            block = A[previous_row:previous_row + p, previous_column:previous_column + q]

            # remove nan columns and nan rows
            nan_cols = np.all(np.isnan(block), axis=0)
            block = block[:, ~nan_cols]
            nan_rows = np.all(np.isnan(block), axis=1)
            block = block[~nan_rows, :]

            # append
            if block.size:
                block_list.append(block)

    return block_list
Examples:
a = np.arange(25)
a = a.reshape((5,5))
out = blockfy(a, 2, 3)
a->
array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24]])
out[0] ->
array([[0., 1., 2.],
[5., 6., 7.]])
out[1]->
array([[3., 4.],
[8., 9.]])
out[-1]->
array([[23., 24.]])
For now this just works when the big 2D array can be perfectly sliced into equally sized subarrays. The code below slices
a ->array([[ 0, 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10, 11],
[12, 13, 14, 15, 16, 17],
[18, 19, 20, 21, 22, 23]])
into this
block_array->
array([[[ 0, 1, 2],
[ 6, 7, 8]],
[[ 3, 4, 5],
[ 9, 10, 11]],
[[12, 13, 14],
[18, 19, 20]],
[[15, 16, 17],
[21, 22, 23]]])
p and q determine the block size
Code
import numpy as np

a = np.arange(24)
a = a.reshape((4, 6))

m = a.shape[0]  # image row size
n = a.shape[1]  # image column size
p = 2           # block row size
q = 3           # block column size

blocks_per_row = m // p
blocks_per_column = n // q

block_array = []
previous_row = 0
for row_block in range(blocks_per_row):
    previous_row = row_block * p
    previous_column = 0
    for column_block in range(blocks_per_column):
        previous_column = column_block * q
        block = a[previous_row:previous_row + p, previous_column:previous_column + q]
        block_array.append(block)

block_array = np.array(block_array)
If you want a solution that also handles the cases when the matrix is
not equally divided, you can use this:
from functools import reduce
from operator import add

half_split = np.array_split(input, 2)
res = map(lambda x: np.array_split(x, 2, axis=1), half_split)
res = reduce(add, res)
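If map/reduce reads awkwardly, a sketch of the same quartering as a flat list comprehension (mat is a hypothetical stand-in for the input matrix):
mat = np.arange(30).reshape(5, 6)
res = [quarter
       for half in np.array_split(mat, 2)
       for quarter in np.array_split(half, 2, axis=1)]
# res holds four subarrays; for a 5x6 input their shapes are (3, 3), (3, 3), (2, 3), (2, 3)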
Here is a solution based on unutbu's answer that handles the case where the matrix cannot be equally divided. In this case, it will resize the matrix before using some interpolation. You need OpenCV for this. Note that I had to swap ncols and nrows to make it work; I haven't figured out why.
import numpy as np
import cv2
import math

def blockshaped(arr, r_nbrs, c_nbrs, interp=cv2.INTER_LINEAR):
    """
    arr     a 2D array, typically an image
    r_nbrs  number of rows of blocks
    c_nbrs  number of cols of blocks
    """
    arr_h, arr_w = arr.shape

    size_w = int(math.floor(arr_w // c_nbrs) * c_nbrs)
    size_h = int(math.floor(arr_h // r_nbrs) * r_nbrs)

    if size_w != arr_w or size_h != arr_h:
        arr = cv2.resize(arr, (size_w, size_h), interpolation=interp)

    nrows = int(size_w // r_nbrs)
    ncols = int(size_h // c_nbrs)

    return (arr.reshape(r_nbrs, ncols, -1, nrows)
               .swapaxes(1, 2)
               .reshape(-1, ncols, nrows))
a = np.random.randint(1, 9, size=(9,9))
out = [np.hsplit(x, 3) for x in np.vsplit(a,3)]
print(a)
print(out)
yields
[[7 6 2 4 4 2 5 2 3]
[2 3 7 6 8 8 2 6 2]
[4 1 3 1 3 8 1 3 7]
[6 1 1 5 7 2 1 5 8]
[8 8 7 6 6 1 8 8 4]
[6 1 8 2 1 4 5 1 8]
[7 3 4 2 5 6 1 2 7]
[4 6 7 5 8 2 8 2 8]
[6 6 5 5 6 1 2 6 4]]
[[array([[7, 6, 2],
[2, 3, 7],
[4, 1, 3]]), array([[4, 4, 2],
[6, 8, 8],
[1, 3, 8]]), array([[5, 2, 3],
[2, 6, 2],
[1, 3, 7]])], [array([[6, 1, 1],
[8, 8, 7],
[6, 1, 8]]), array([[5, 7, 2],
[6, 6, 1],
[2, 1, 4]]), array([[1, 5, 8],
[8, 8, 4],
[5, 1, 8]])], [array([[7, 3, 4],
[4, 6, 7],
[6, 6, 5]]), array([[2, 5, 6],
[5, 8, 2],
[5, 6, 1]]), array([[1, 2, 7],
[8, 2, 8],
[2, 6, 4]])]]
I publish my solution. Notice that this code doesn't actually create copies of the original array, so it works well with big data. Moreover, it doesn't crash if the array cannot be divided evenly (but you can easily add a condition for that by deleting ceil and checking whether v_slices and h_slices divide without remainder).
import numpy as np
from math import ceil
a = np.arange(9).reshape(3, 3)
p, q = 2, 2
width, height = a.shape
v_slices = ceil(width / p)
h_slices = ceil(height / q)
for h in range(h_slices):
    for v in range(v_slices):
        block = a[h * p : h * p + p, v * q : v * q + q]
        # do something with the block
This code changes (or, more precisely, gives you direct access to part of an array) this:
[[0 1 2]
[3 4 5]
[6 7 8]]
Into this:
[[0 1]
[3 4]]
[[2]
[5]]
[[6 7]]
[[8]]
If you need actual copies, Aenaon's code is what you are looking for.
If you are sure that the big array can be divided evenly, you can use the numpy splitting tools.
To add to Aenaon's answer and his blockfy function: if you are working with color images / 3D arrays, here is my pipeline to create crops of 224 x 224 for 3-channel input:
def blockfy(a, p, q):
    '''
    Divides array a into subarrays of size p-by-q
    p: block row size
    q: block column size
    '''
    m = a.shape[0]  # image row size
    n = a.shape[1]  # image column size

    # pad array with NaNs so it can be divided by p row-wise and by q column-wise
    bpr = ((m - 1) // p + 1)  # blocks per row axis
    bpc = ((n - 1) // q + 1)  # blocks per column axis
    M = p * bpr
    N = q * bpc

    A = np.nan * np.ones([M, N])
    A[:a.shape[0], :a.shape[1]] = a

    block_list = []
    previous_row = 0
    for row_block in range(bpr):
        previous_row = row_block * p
        previous_column = 0
        for column_block in range(bpc):
            previous_column = column_block * q
            block = A[previous_row:previous_row + p, previous_column:previous_column + q]

            # remove nan columns and nan rows
            nan_cols = np.all(np.isnan(block), axis=0)
            block = block[:, ~nan_cols]
            nan_rows = np.all(np.isnan(block), axis=1)
            block = block[~nan_rows, :]

            # append
            if block.size:
                block_list.append(block)

    return block_list
then extended the above to (the imports here are my assumption of the original setup: skimage.io for reading and PIL for saving):
import os
import numpy as np
from skimage import io
from PIL import Image

for file in os.listdir(path_to_crop):  # list files in your folder
    img = io.imread(path_to_crop + file, as_gray=False)  # open image
    r = blockfy(img[:, :, 0], 224, 224)  # crop blocks of 224 x 224 for red channel
    g = blockfy(img[:, :, 1], 224, 224)  # crop blocks of 224 x 224 for green channel
    b = blockfy(img[:, :, 2], 224, 224)  # crop blocks of 224 x 224 for blue channel

    for x in range(0, len(r)):
        img = np.array((r[x], g[x], b[x]))    # combine each channel into one patch by patch
        img = img.astype(np.uint8)            # cast back to proper integers
        img_swap = img.swapaxes(0, 2)         # need to swap axes due to the way things were processed
        img_swap_2 = img_swap.swapaxes(0, 1)  # do it again
        Image.fromarray(img_swap_2).save(path_save_crop + str(x) + "bounding" + file,
                                         format='jpeg',
                                         subsampling=0,
                                         quality=100)  # save patch with new name etc.

NumPy random shuffle rows independently

I have the following array:
import numpy as np
a = np.array([[ 1, 2, 3],
[ 1, 2, 3],
[ 1, 2, 3]])
I understand that np.random.shuffle(a.T) will shuffle the array along the rows, but what I need is for it to shuffle each row independently. How can this be done in numpy? Speed is critical as there will be several million rows.
For this specific problem, each row will contain the same starting population.
import numpy as np
np.random.seed(2018)
def scramble(a, axis=-1):
    """
    Return an array with the values of `a` independently shuffled along the
    given axis
    """
    b = a.swapaxes(axis, -1)
    n = a.shape[axis]
    idx = np.random.choice(n, n, replace=False)
    b = b[..., idx]
    return b.swapaxes(axis, -1)
a = np.arange(4*9).reshape(4, 9)
# array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8],
# [ 9, 10, 11, 12, 13, 14, 15, 16, 17],
# [18, 19, 20, 21, 22, 23, 24, 25, 26],
# [27, 28, 29, 30, 31, 32, 33, 34, 35]])
print(scramble(a, axis=1))
yields
[[ 3 8 7 0 4 5 1 2 6]
[12 17 16 9 13 14 10 11 15]
[21 26 25 18 22 23 19 20 24]
[30 35 34 27 31 32 28 29 33]]
while scrambling along the 0-axis:
print(scramble(a, axis=0))
yields
[[18 19 20 21 22 23 24 25 26]
[ 0 1 2 3 4 5 6 7 8]
[27 28 29 30 31 32 33 34 35]
[ 9 10 11 12 13 14 15 16 17]]
This works by first swapping the target axis with the last axis:
b = a.swapaxes(axis, -1)
This is a common trick used to standardize code which deals with one axis.
It reduces the general case to the specific case of dealing with the last axis.
Since in NumPy version 1.10 or higher swapaxes returns a view, there is no copying involved and so calling swapaxes is very quick.
Now we can generate a new index order for the last axis:
n = a.shape[axis]
idx = np.random.choice(n, n, replace=False)
Now we can shuffle b (independently along the last axis):
b = b[..., idx]
and then reverse the swapaxes to return an a-shaped result:
return b.swapaxes(axis, -1)
If you don't want a return value and want to operate on the array directly, you can specify the indices to shuffle.
>>> import numpy as np
>>>
>>>
>>> a = np.array([[1,2,3], [1,2,3], [1,2,3]])
>>>
>>> # Shuffle row `2` independently
>>> np.random.shuffle(a[2])
>>> a
array([[1, 2, 3],
[1, 2, 3],
[3, 2, 1]])
>>>
>>> # Shuffle column `0` independently
>>> np.random.shuffle(a[:,0])
>>> a
array([[3, 2, 3],
[1, 2, 3],
[1, 2, 1]])
If you want a return value as well, you can use numpy.random.permutation, in which case replace np.random.shuffle(a[n]) with a[n] = np.random.permutation(a[n]).
Warning, do not do a[n] = np.random.shuffle(a[n]). shuffle does not return anything, so the row/column you end up "shuffling" will be filled with nan instead.
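For example, a safe one-liner for a single row:
a[2] = np.random.permutation(a[2])  # permutation returns a shuffled copy, so assignment is safe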
Good answer above. But I will throw in a quick and dirty way:
a = np.array([[1,2,3], [1,2,3], [1,2,3]])
ignore_list_output = [np.random.shuffle(x) for x in a]
Then, a can be something like this
array([[2, 1, 3],
       [1, 3, 2],
       [3, 2, 1]])
Not very elegant but you can get this job done with just one short line.
Building on my comment to #Hun's answer, here's the fastest way to do this:
def shuffle_along(X):
    """Minimal in place independent-row shuffler."""
    [np.random.shuffle(x) for x in X]
This works in-place and can only shuffle rows. If you need more options:
def shuffle_along(X, axis=0, inline=False):
    """More elaborate version of the above."""
    if not inline:
        X = X.copy()
    if axis == 0:
        [np.random.shuffle(x) for x in X]
    if axis == 1:
        [np.random.shuffle(x) for x in X.T]
    if not inline:
        return X
This, however, has the limitation of only working on 2d-arrays. For higher dimensional tensors, I would use:
def shuffle_along(X, axis=0, inline=True):
    """Shuffle along any axis of a tensor."""
    if not inline:
        X = X.copy()
    np.apply_along_axis(np.random.shuffle, axis, X)  # <-- I just changed this
    if not inline:
        return X
You can do it with numpy without any loop or extra function, and much faster. E.g., we have an array of size (6, 2) and we want a subarray (2, 2) with an independent random index for each column.
import numpy as np
test = np.array([[1, 1],
                 [2, 2],
                 [0.5, 0.5],
                 [0.3, 0.3],
                 [4, 4],
                 [7, 7]])
id_rnd = np.random.randint(6, size=(2, 2))  # random row indices; use np.random.choice over a range if you don't want replacement
new = np.take_along_axis(test, id_rnd, axis=0)
Out:
array([[2. , 2. ],
[0.5, 2. ]])
It works for any number of dimensions.
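In the same spirit, a sketch of a fully vectorized independent-row shuffle: argsort over a matrix of random keys gives an independent permutation per row, which take_along_axis then applies.
a = np.array([[1, 2, 3], [1, 2, 3], [1, 2, 3]])
idx = np.argsort(np.random.rand(*a.shape), axis=1)  # one random permutation per row
shuffled = np.take_along_axis(a, idx, axis=1)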
As of NumPy 1.20.0 released in January 2021 we have a permuted() method on the new Generator type (introduced with the new random API in NumPy 1.17.0, released in July 2019). This does exactly what you need:
import numpy as np
rng = np.random.default_rng()
a = np.array([
[1, 2, 3],
[1, 2, 3],
[1, 2, 3],
])
shuffled = rng.permuted(a, axis=1)
This gives you something like
>>> print(shuffled)
[[2 3 1]
[1 3 2]
[2 1 3]]
As you can see, the rows are permuted independently. This is in sharp contrast with both rng.permutation() and rng.shuffle().
If you want an in-place update you can pass the original array as the out keyword argument. And you can use the axis keyword argument to choose the direction along which to shuffle your array.
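For example:
rng.permuted(a, axis=1, out=a)  # shuffle each row of a in place
cols = rng.permuted(a, axis=0)  # return a copy with each column shuffled independently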

More numpy way of iterating through the 'orthogonal' diagonals of a 2D array

I have the following code that iterates along the diagonals that are orthogonal to the diagonals normally returned by np.diagonal. It starts at position (0, 0) and works its way towards the lower right coordinate.
The code works as intended but is not very numpy with all its loops and inefficient in having to create many arrays to do the trick.
So I wonder if there is a nicer way to do this, because I don't see how I would stride my array or use the diagonal-methods of numpy to do it in a nicer way (though I expect there are some tricks I fail to see).
import numpy as np
A = np.zeros((4,5))
#Construct a distance array of the same size that uses (0, 0) as origin
#and evaluates distances along the first and second dimensions slightly
#differently so that no two values in the array are the same
D = np.zeros(A.shape)
for i in range(D.shape[0]):
    for j in range(D.shape[1]):
        D[i, j] = i * (1 + 1.0 / (grid_shape[0] + 1)) + j  # grid_shape is defined elsewhere in my code

print(D)
#[[ 0. 1. 2. 3. 4. ]
# [ 1.05882353 2.05882353 3.05882353 4.05882353 5.05882353]
# [ 2.11764706 3.11764706 4.11764706 5.11764706 6.11764706]
# [ 3.17647059 4.17647059 5.17647059 6.17647059 7.17647059]]
#Make a flat sorted copy
rD = D.ravel().copy()
rD.sort()
#Just to show how it works, assigning incrementing values
#iterating along the 'orthogonal' diagonals starting at the (0, 0) position
for i, v in enumerate(rD):
    A[D == v] = i

print(A)
#[[ 0 1 3 6 10]
# [ 2 4 7 11 14]
# [ 5 8 12 15 17]
# [ 9 13 16 18 19]]
Edit
To clarify, I want to iterate element-wise through the entire A, but do so in the order the code above produces (which is displayed in the final print).
It is not important which direction the iteration goes along the diagonals (if 1 and 2 switched places, and 3 and 5, etc. in A), only that the diagonals are orthogonal to the main diagonal of A (the one produced by np.diag(A)).
The application/reason for this question is in my previous question (in the solution part at the bottom of that question): Constructing a 2D grid from potentially incomplete list of candidates
Here is a way that avoids Python for-loops.
First, let's look at our addition tables:
import numpy as np
grid_shape = (4,5)
N = np.prod(grid_shape)
y = np.add.outer(np.arange(grid_shape[0]),np.arange(grid_shape[1]))
print(y)
# [[0 1 2 3 4]
# [1 2 3 4 5]
# [2 3 4 5 6]
# [3 4 5 6 7]]
The key idea is that if we visit the sums in the addition table in order, we would be iterating through the array in the desired order.
We can find out the indices associated with that order using np.argsort:
idx = np.argsort(y.ravel())
print(idx)
# [ 0 1 5 2 6 10 3 7 11 15 4 8 12 16 9 13 17 14 18 19]
idx is golden. It is essentially everything you need to iterate through any 2D array of shape (4,5), since a 2D array is just a 1D array reshaped.
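Concretely, to visit the elements of any array of that shape in diagonal order, index its flattened form with idx (B here is just an illustrative array):
B = np.arange(N).reshape(grid_shape)
for v in B.ravel()[idx]:
    pass  # visits B's elements along the 'orthogonal' diagonals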
If your ultimate goal is to generate the array A that you show above at the end of your post, then you could use argsort again:
print(np.argsort(idx).reshape(grid_shape[0],-1))
# [[ 0 1 3 6 10]
# [ 2 4 7 11 14]
# [ 5 8 12 15 17]
# [ 9 13 16 18 19]]
Or, alternatively, if you need to assign other values to A, perhaps this would be more useful:
A = np.zeros(grid_shape)
A1d = A.ravel()
A1d[idx] = np.arange(N) # you can change np.arange(N) to any 1D array of shape (N,)
print(A)
# [[ 0.  1.  3.  6. 10.]
#  [ 2.  4.  7. 11. 14.]
#  [ 5.  8. 12. 15. 17.]
#  [ 9. 13. 16. 18. 19.]]
I know you asked for a way to iterate through your array, but I wanted to show the above because generating arrays through whole-array assignment or numpy function calls (like np.argsort) as done above will probably be faster than using a Python loop. But if you need to use a Python loop, then:
for i, j in enumerate(idx):
    A1d[j] = i

print(A)
# [[ 0.  1.  3.  6. 10.]
#  [ 2.  4.  7. 11. 14.]
#  [ 5.  8. 12. 15. 17.]
#  [ 9. 13. 16. 18. 19.]]
>>> D
array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19]])
>>> D[::-1].diagonal(offset=1)
array([16, 12, 8, 4])
>>> D[::-1].diagonal(offset=-3)
array([0])
>>> np.hstack([D[::-1].diagonal(offset=-x) for x in np.arange(-4,4)])[::-1]
array([ 0, 1, 5, 2, 6, 10, 3, 7, 11, 15, 4, 8, 12, 16, 9, 13, 17,
14, 18, 19])
Simpler as long as it is not a large matrix.
I'm not sure if this is what you really want, but maybe:
>>> import numpy as np
>>> ar = np.random.random((4,4))
>>> ar
array([[ 0.04844116, 0.10543146, 0.30506354, 0.4813217 ],
[ 0.59962641, 0.44428831, 0.16629692, 0.65330539],
[ 0.61854927, 0.6385717 , 0.71615447, 0.13172049],
[ 0.05001291, 0.41577457, 0.5579213 , 0.7791656 ]])
>>> ar.diagonal()
array([ 0.04844116, 0.44428831, 0.71615447, 0.7791656 ])
>>> ar[::-1].diagonal()
array([ 0.05001291, 0.6385717 , 0.16629692, 0.4813217 ])
Edit
As a general solution, for arbitrarily shaped arrays, you can use
import numpy as np
shape = tuple([np.random.randint(3, 10) for i in range(2)])
ar = np.arange(np.prod(shape)).reshape(shape)
out = np.hstack([ar[::-1].diagonal(offset=x)
                 for x in np.arange(-ar.shape[0] + 1, ar.shape[1])])
print(ar)
print(out)
giving, for example
[[ 0 1 2 3 4]
[ 5 6 7 8 9]
[10 11 12 13 14]
[15 16 17 18 19]
[20 21 22 23 24]]
[ 0 5 1 10 6 2 15 11 7 3 20 16 12 8 4 21 17 13 9 22 18 14 23 19 24]
