I have a grayscale image with values between 0 (black) and 255 (white). I have a target matrix of the same size as the grayscale image. I need to start at a random pixel in the grayscale image and traverse through the image one pixel at a time (in a depth-first search manner), copying its value to the corresponding location in the target matrix. I obviously need to do this only for the non-white pixels. How can I do this? I thought that I could get the connected components of the grayscale image and traverse each pixel one by one, but I couldn't find any suitable implementation of connected components. Any ideas?
For example, if my grayscale image is:
[[255,255,255,255,255,255,255]
[255,255, 0 ,10 ,255,255, 1 ]
[255,30 ,255,255,50 ,255, 9 ]
[51 ,20 ,255,255, 9 ,255,240]
[255,255,80 ,50 ,170,255, 20]
[255,255,255,255,255,255, 0 ]
[255,255,255,255,255,255, 69]]
Then a possible traversal is [0,10,50,9,170,50,80,20,51,30] followed by [1,9,240,20,0,69] to give [0,10,50,9,170,50,80,20,51,30,1,9,240,20,0,69]. The order between the different objects doesn't matter.
Other possible traversals are:
[1,9,240,20,0,69,0,10,50,9,170,50,80,20,51,30] or [1,9,240,20,0,69,0,10,50,9,170,50,80,20,30,51] or
[1,9,240,20,0,69,10,50,9,170,50,80,20,30,0,51]
etc.
You can use networkx:
from itertools import product, repeat
import numpy as np
import networkx as nx
arr = np.array(
[[255,255,255,255,255,255,255],
[255,255, 0 ,10 ,255,255, 1 ],
[255,30 ,255,255,50 ,255, 9 ],
[51 ,20 ,255,255, 9 ,255,240],
[255,255,80 ,50 ,170,255, 20],
[255,255,255,255,255,255, 0 ],
[255,255,255,255,255,255, 69]])
# generate edges
shift = list(product(*repeat([-1, 0, 1], 2)))  # includes (0, 0), which only adds harmless self-loops
x_max, y_max = arr.shape
edges = []
for x, y in np.ndindex(arr.shape):
    for x_delta, y_delta in shift:
        x_neighb = x + x_delta
        y_neighb = y + y_delta
        if (0 <= x_neighb < x_max) and (0 <= y_neighb < y_max):
            edge = (x, y), (x_neighb, y_neighb)
            edges.append(edge)
# build graph
G = nx.from_edgelist(edges)
# draw graph
pos = {(x, y): (y, x_max-x) for x, y in G.nodes()}
nx.draw(G, with_labels=True, pos=pos, node_color='coral', node_size=1000)
# draw graph with numbers
labels = dict(np.ndenumerate(arr))
node_color = ['coral' if labels[n] == 255 else 'lightgrey' for n in G.nodes()]
nx.draw(G, with_labels=True, pos=pos, labels=labels, node_color=node_color, node_size=1000)
# build subgraph
select = np.argwhere(arr < 255)
G1 = G.subgraph(map(tuple, select))
# draw subgraph
pos = {(x, y): (y, x_max-x) for x, y in G1.nodes()}
labels1 = {n:labels[n] for n in G1.nodes()}
nx.draw(G1, with_labels=True, pos=pos, labels=labels1, node_color='lightgrey', node_size=1000)
# find connected components and DFS trees
for i in nx.connected_components(G1):
    source = next(iter(i))
    idx = nx.dfs_tree(G1, source=source)
    print(arr[tuple(np.array(idx).T)])
Output:
[ 0 10 50 9 50 80 20 30 51 170]
[ 9 1 240 20 0 69]
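If you would rather not build a graph at all, scipy ships a connected-component labeller that gives you the same grouping directly. A minimal sketch, assuming 8-connectivity via a 3x3 structuring element (note that np.argwhere returns each component's pixels in scan order, not DFS order, so you would still need a traversal within each component if the visiting order matters):
import numpy as np
from scipy import ndimage

mask = arr != 255
labeled, n = ndimage.label(mask, structure=np.ones((3, 3), dtype=int))
for i in range(1, n + 1):
    coords = np.argwhere(labeled == i)  # pixel coordinates of one component
    print(arr[tuple(coords.T)])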
So after much research into suitable implementations of connected components, I came up with my own solution. To get the best performance I could, I relied on these rules:
Don't use networkx, because it's slow according to this benchmark.
Use vectorized operations as much as possible, because Python-based iterations are slow according to this answer.
I'm implementing a connected-components algorithm for the image here only because I believe it is an essential part of this question.
Connected-components algorithm for an image
import numpy as np
import numexpr as ne
import pandas as pd
import igraph
def get_coords(arr):
    x, y = np.indices(arr.shape)
    mask = arr != 255
    return np.array([x[mask], y[mask]]).T

def compare(r1, r2):
    # assuming r1 is a sorted array, returns:
    # 1) locations of r2 items in r1
    # 2) mask array of these locations
    idx = np.searchsorted(r1, r2)
    idx[idx == len(r1)] = 0
    mask = r1[idx] == r2
    return idx, mask

def get_reduction(coords, s):
    d = {'s': s, 'c0': coords[:,0], 'c1': coords[:,1]}
    return ne.evaluate('c0*s+c1', d)

def get_bounds(coords, increment):
    # the bound must exceed the largest second coordinate
    return np.max(coords[:, 1]) + 1 + increment

def get_shift_intersections(coords, shifts):
    # neighbours found for each node, e.g. [[0, 1, 2], ...]
    s = get_bounds(coords, 10)
    rdim = get_reduction(coords, s)
    shift_mask, shift_idx = [], []
    for sh in shifts:
        sh_rdim = get_reduction(coords + sh, s)
        sh_idx, sh_mask = compare(rdim, sh_rdim)
        shift_idx.append(sh_idx)
        shift_mask.append(sh_mask)
    return np.array(shift_idx).T, np.array(shift_mask).T
def connected_components(coords, shifts):
    shift_idx, shift_mask = get_shift_intersections(coords, shifts)
    x, y = np.indices((len(shift_idx), len(shift_idx[0])))
    vertices = np.arange(len(coords))
    edges = np.array([x[shift_mask], shift_idx[shift_mask]]).T
    graph = igraph.Graph()
    graph.add_vertices(vertices)
    graph.add_edges(edges)
    graph_tags = graph.clusters().membership
    values = pd.DataFrame(graph_tags).groupby([0]).indices
    return values

coords = get_coords(arr)
shifts = ((0,1), (1,0), (1,1), (-1,1))
comps = connected_components(coords, shifts=shifts)
for c in comps:
    print(coords[comps[c]].tolist())
Outcome
[[1, 2], [1, 3], [2, 1], [2, 4], [3, 0], [3, 1], [3, 4], [4, 2], [4, 3], [4, 4]]
[[1, 6], [2, 6], [3, 6], [4, 6], [5, 6], [6, 6]]
Explanation
The algorithm consists of these steps:
First, we need to convert the image into the coordinates of its non-white cells. That can be done using this function:
def get_coords(arr):
    x, y = np.indices(arr.shape)
    mask = arr != 255
    return np.array([x[mask], y[mask]]).T
I'll call the output array X for clarity.
Next, we need to consider all the cells of each shift that intersect with X.
To do that, we should solve the intersection problem I posted a few days earlier. I found it quite difficult to solve using multidimensional numpy arrays. Thanks to Divakar, who proposed a nice way of dimensionality reduction using the numexpr package, which speeds operations up even more than numpy. I implement it here in this function:
def get_reduction(coords, s):
    d = {'s': s, 'c0': coords[:,0], 'c1': coords[:,1]}
    return ne.evaluate('c0*s+c1', d)
To make it work, we should set a bound s, which can be calculated automatically using the function
def get_bounds(coords, increment):
    return np.max(coords[:, 1]) + 1 + increment
or entered manually. Since the algorithm shifts coordinates, pairs of coordinates might fall out of bounds, therefore I have used a slight increment here. Finally, as a solution to the post I mentioned above, the indexes of coordinates of X (reduced to 1D) that intersect with any other array of coordinates Y (also reduced to 1D) can be accessed via the function
def compare(r1, r2):
    # assuming r1 is a sorted array, returns:
    # 1) locations of r2 items in r1
    # 2) mask array of these locations
    idx = np.searchsorted(r1, r2)
    idx[idx == len(r1)] = 0
    mask = r1[idx] == r2
    return idx, mask
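As a quick illustration of how the reduction and compare fit together (toy coordinates, not the image above):
import numpy as np
import numexpr as ne

X = np.array([[1, 2], [1, 3], [2, 1]])
Y = X + (0, 1)                      # X shifted one step to the right
s = 10                              # bound larger than any second coordinate
rx = ne.evaluate('c0*s+c1', {'s': s, 'c0': X[:, 0], 'c1': X[:, 1]})  # [12 13 21]
ry = ne.evaluate('c0*s+c1', {'s': s, 'c0': Y[:, 0], 'c1': Y[:, 1]})  # [13 14 22]
idx, mask = compare(rx, ry)         # only the pair (1, 3) exists in X
print(idx[mask])                    # [1]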
Then we plug in all the corresponding arrays of shifts. As we can see, the above function outputs two variables: an array of index locations in the main set X and its mask array. The proper indexes can be found using idx[mask], and since this procedure is applied for each shift, I implemented the get_shift_intersections(coords, shifts) method for this case.
Finally, we construct nodes & edges and take the output from igraph. The point here is that igraph performs well only with nodes that are consecutive integers starting from 0. That's why my script was designed to use mask-based access to item locations in X. Briefly, here is how I used igraph:
I have calculated coordinate pairs:
[[1, 2], [1, 3], [1, 6], [2, 1], [2, 4], [2, 6], [3, 0], [3, 1], [3, 4], [3, 6], [4, 2], [4, 3], [4, 4], [4, 6], [5, 6], [6, 6]]
Then I assigned integers to them:
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15]
My edges look like this:
[[0, 1], [1, 4], [2, 5], [3, 7], [3, 0], [4, 8], [5, 9], [6, 7], [6, 3], [7, 10], [8, 12], [9, 13], [10, 11], [11, 12], [11, 8], [13, 14], [14, 15]]
Output of graph.clusters().membership looks like this:
[0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1]
And finally, I used the groupby method of Pandas to find the indexes of the separate groups (I use Pandas here because I found it to be the most efficient way of grouping in Python).
Notes
Installing igraph is not always straightforward; you might need to install it from unofficial binaries.
Related
Is there a way to slice a 2d array in numpy into smaller 2d arrays?
Example
[[1,2,3,4], -> [[1,2] [3,4]
[5,6,7,8]] [5,6] [7,8]]
So I basically want to cut down a 2x4 array into 2 2x2 arrays. Looking for a generic solution to be used on images.
There was another question a couple of months ago which clued me in to the idea of using reshape and swapaxes. The h//nrows makes sense since this keeps the first block's rows together. It also makes sense that you'll need nrows and ncols to be part of the shape. -1 tells reshape to fill in whatever number is necessary to make the reshape valid. Armed with the form of the solution, I just tried things until I found the formula that works.
You should be able to break your array into "blocks" using some combination of reshape and swapaxes:
def blockshaped(arr, nrows, ncols):
    """
    Return an array of shape (n, nrows, ncols) where
    n * nrows * ncols = arr.size

    If arr is a 2D array, the returned array should look like n subblocks with
    each subblock preserving the "physical" layout of arr.
    """
    h, w = arr.shape
    assert h % nrows == 0, f"{h} rows is not evenly divisible by {nrows}"
    assert w % ncols == 0, f"{w} cols is not evenly divisible by {ncols}"
    return (arr.reshape(h//nrows, nrows, -1, ncols)
               .swapaxes(1,2)
               .reshape(-1, nrows, ncols))
turns c
np.random.seed(365)
c = np.arange(24).reshape((4, 6))
print(c)
[out]:
[[ 0 1 2 3 4 5]
[ 6 7 8 9 10 11]
[12 13 14 15 16 17]
[18 19 20 21 22 23]]
into
print(blockshaped(c, 2, 3))
[out]:
[[[ 0 1 2]
[ 6 7 8]]
[[ 3 4 5]
[ 9 10 11]]
[[12 13 14]
[18 19 20]]
[[15 16 17]
[21 22 23]]]
I've posted an inverse function, unblockshaped, here, and an N-dimensional generalization here. The generalization gives a little more insight into the reasoning behind this algorithm.
Note that there is also superbatfish's blockwise_view. It arranges the blocks in a different format (using more axes) but it has the advantage of (1) always returning a view and (2) being capable of handling arrays of any dimension.
It seems to me that this is a task for numpy.split or some variant.
e.g.
a = np.arange(30).reshape([5,6]) #a.shape = (5,6)
a1 = np.split(a,3,axis=1)
#'a1' is a list of 3 arrays of shape (5,2)
a2 = np.split(a, [2,4])
#'a2' is a list of three arrays of shape (2,5), (2,5), (1,5)
If you have an NxN image you can create, e.g., a list of 2 NxN/2 subimages, and then divide them along the other axis.
numpy.hsplit and numpy.vsplit are also available.
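For instance, composing the two splits (a small sketch with a hypothetical 4x4 array):
import numpy as np

a = np.arange(16).reshape(4, 4)
halves = np.vsplit(a, 2)                     # two 2x4 subimages
blocks = [np.hsplit(h, 2) for h in halves]   # a 2x2 grid of 2x2 blocks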
There are some other answers that seem well-suited for your specific case already, but your question piqued my interest in the possibility of a memory-efficient solution usable up to the maximum number of dimensions that numpy supports, and I ended up spending most of the afternoon coming up with a possible method. (The method itself is relatively simple; it's just that I still haven't used most of the really fancy features that numpy supports, so most of the time was spent researching what numpy had available and how much it could do so that I didn't have to do it.)
def blockgen(array, bpa):
    """Creates a generator that yields multidimensional blocks from the given
    array(_like); bpa is an array_like consisting of the number of blocks per axis
    (minimum of 1, must be a divisor of the corresponding axis size of array). As
    the blocks are selected using normal numpy slicing, they will be views rather
    than copies; this is good for very large multidimensional arrays that are being
    blocked, and for very large blocks, but it also means that the result must be
    copied if it is to be modified (unless modifying the original data as well is
    intended)."""
    bpa = np.asarray(bpa)  # in case bpa wasn't already an ndarray

    # parameter checking
    if array.ndim != bpa.size:  # bpa doesn't match array dimensionality
        raise ValueError("Size of bpa must be equal to the array dimensionality.")
    if (not np.issubdtype(bpa.dtype, np.integer)  # bpa must be all integers
            or (bpa < 1).any()                    # all values in bpa must be >= 1
            or (array.shape % bpa).any()):        # % != 0 means not evenly divisible
        raise ValueError("bpa ({0}) must consist of nonzero positive integers "
                         "that evenly divide the corresponding array axis "
                         "size".format(bpa))

    # generate block edge indices
    rgen = (np.r_[:array.shape[i]+1:array.shape[i]//blk_n]
            for i, blk_n in enumerate(bpa))

    # build slice sequences for each axis (unfortunately broadcasting
    # can't be used to make the items easy to operate over)
    c = [[np.s_[i:j] for i, j in zip(r[:-1], r[1:])] for r in rgen]

    # Now to get the blocks; this is slightly less efficient than it could be
    # because numpy doesn't like jagged arrays and I didn't feel like writing
    # a ufunc for it.
    for idxs in np.ndindex(*bpa):
        blockbounds = tuple(c[j][idxs[j]] for j in range(bpa.size))
        yield array[blockbounds]
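A quick usage sketch, assuming a hypothetical 4x6 array split into 2 blocks per axis:
a = np.arange(24).reshape(4, 6)
for block in blockgen(a, (2, 2)):   # yields four 2x3 views
    print(block)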
Your question is practically the same as this one. You can use the one-liner with np.ndindex() and reshape():
def cutter(a, r, c):
    lenr = a.shape[0] // r
    lenc = a.shape[1] // c
    return np.array([a[i*r:(i+1)*r, j*c:(j+1)*c]
                     for (i, j) in np.ndindex(lenr, lenc)]).reshape(lenr, lenc, r, c)
To create the result you want:
a = np.arange(1,9).reshape(2,4)
#array([[1, 2, 3, 4],
# [5, 6, 7, 8]])
cutter( a, 1, 2 )
#array([[[[1, 2]],
# [[3, 4]]],
# [[[5, 6]],
# [[7, 8]]]])
A minor enhancement to TheMeaningfulEngineer's answer that handles the case when the big 2d array cannot be perfectly sliced into equally sized subarrays:
def blockfy(a, p, q):
    '''
    Divides array a into subarrays of size p-by-q
    p: block row size
    q: block column size
    '''
    m = a.shape[0]  #image row size
    n = a.shape[1]  #image column size

    # pad array with NaNs so it can be divided by p row-wise and by q column-wise
    bpr = ((m-1)//p + 1)  #blocks per row
    bpc = ((n-1)//q + 1)  #blocks per column
    M = p * bpr
    N = q * bpc

    A = np.nan * np.ones([M, N])
    A[:a.shape[0], :a.shape[1]] = a

    block_list = []
    previous_row = 0
    for row_block in range(bpr):
        previous_row = row_block * p
        previous_column = 0
        for column_block in range(bpc):
            previous_column = column_block * q
            block = A[previous_row:previous_row+p, previous_column:previous_column+q]

            # remove nan columns and nan rows
            nan_cols = np.all(np.isnan(block), axis=0)
            block = block[:, ~nan_cols]
            nan_rows = np.all(np.isnan(block), axis=1)
            block = block[~nan_rows, :]

            ## append
            if block.size:
                block_list.append(block)

    return block_list
Examples:
a = np.arange(25)
a = a.reshape((5,5))
out = blockfy(a, 2, 3)
a->
array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24]])
out[0] ->
array([[0., 1., 2.],
[5., 6., 7.]])
out[1]->
array([[3., 4.],
[8., 9.]])
out[-1]->
array([[23., 24.]])
For now it just works when the big 2d array can be perfectly sliced into equally sized subarrays.
The code below slices
a ->array([[ 0, 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10, 11],
[12, 13, 14, 15, 16, 17],
[18, 19, 20, 21, 22, 23]])
into this
block_array->
array([[[ 0, 1, 2],
[ 6, 7, 8]],
[[ 3, 4, 5],
[ 9, 10, 11]],
[[12, 13, 14],
[18, 19, 20]],
[[15, 16, 17],
[21, 22, 23]]])
p and q determine the block size
Code
import numpy as np

a = np.arange(24)
a = a.reshape((4,6))

m = a.shape[0]  #image row size
n = a.shape[1]  #image column size
p = 2           #block row size
q = 3           #block column size

blocks_per_row = m // p
blocks_per_column = n // q

block_array = []
previous_row = 0
for row_block in range(blocks_per_row):
    previous_row = row_block * p
    previous_column = 0
    for column_block in range(blocks_per_column):
        previous_column = column_block * q
        block = a[previous_row:previous_row+p, previous_column:previous_column+q]
        block_array.append(block)
block_array = np.array(block_array)
If you want a solution that also handles the cases when the matrix is not divided equally, you can use this (note that in Python 3 reduce must be imported from functools):
import numpy as np
from functools import reduce
from operator import add

half_split = np.array_split(input, 2)
res = map(lambda x: np.array_split(x, 2, axis=1), half_split)
res = reduce(add, res)
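For instance, a small sketch with a hypothetical 4x5 array (so the column split is uneven), reusing the imports above:
arr = np.arange(20).reshape(4, 5)
half_split = np.array_split(arr, 2)
res = reduce(add, map(lambda x: np.array_split(x, 2, axis=1), half_split))
# res is a flat list of four blocks; the column counts differ (3 and 2)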
Here is a solution based on unutbu's answer that handles the case where the matrix cannot be divided equally. In this case, it will resize the matrix before slicing, using some interpolation. You need OpenCV for this. Note that I had to swap ncols and nrows to make it work; I haven't figured out why.
import numpy as np
import cv2
import math
def blockshaped(arr, r_nbrs, c_nbrs, interp=cv2.INTER_LINEAR):
    """
    arr     a 2D array, typically an image
    r_nbrs  number of rows of blocks
    c_nbrs  number of cols of blocks
    """
    arr_h, arr_w = arr.shape

    size_w = int( math.floor(arr_w // c_nbrs) * c_nbrs )
    size_h = int( math.floor(arr_h // r_nbrs) * r_nbrs )

    if size_w != arr_w or size_h != arr_h:
        arr = cv2.resize(arr, (size_w, size_h), interpolation=interp)

    nrows = int(size_w // r_nbrs)
    ncols = int(size_h // c_nbrs)

    return (arr.reshape(r_nbrs, ncols, -1, nrows)
               .swapaxes(1,2)
               .reshape(-1, ncols, nrows))
a = np.random.randint(1, 9, size=(9,9))
out = [np.hsplit(x, 3) for x in np.vsplit(a,3)]
print(a)
print(out)
yields
[[7 6 2 4 4 2 5 2 3]
[2 3 7 6 8 8 2 6 2]
[4 1 3 1 3 8 1 3 7]
[6 1 1 5 7 2 1 5 8]
[8 8 7 6 6 1 8 8 4]
[6 1 8 2 1 4 5 1 8]
[7 3 4 2 5 6 1 2 7]
[4 6 7 5 8 2 8 2 8]
[6 6 5 5 6 1 2 6 4]]
[[array([[7, 6, 2],
[2, 3, 7],
[4, 1, 3]]), array([[4, 4, 2],
[6, 8, 8],
[1, 3, 8]]), array([[5, 2, 3],
[2, 6, 2],
[1, 3, 7]])], [array([[6, 1, 1],
[8, 8, 7],
[6, 1, 8]]), array([[5, 7, 2],
[6, 6, 1],
[2, 1, 4]]), array([[1, 5, 8],
[8, 8, 4],
[5, 1, 8]])], [array([[7, 3, 4],
[4, 6, 7],
[6, 6, 5]]), array([[2, 5, 6],
[5, 8, 2],
[5, 6, 1]]), array([[1, 2, 7],
[8, 2, 8],
[2, 6, 4]])]]
I publish my solution. Notice that this code doesn't actually create copies of the original array, so it works well with big data. Moreover, it doesn't crash if the array cannot be divided evenly (but you can easily add a condition for that by deleting ceil and checking whether v_slices and h_slices divide without a remainder).
import numpy as np
from math import ceil
a = np.arange(9).reshape(3, 3)
p, q = 2, 2
width, height = a.shape
v_slices = ceil(width / p)
h_slices = ceil(height / q)
for h in range(h_slices):
    for v in range(v_slices):
        block = a[h * p : h * p + p, v * q : v * q + q]
        # do something with a block
This code changes (or, more precisely, gives you direct access to part of an array) this:
[[0 1 2]
[3 4 5]
[6 7 8]]
Into this:
[[0 1]
[3 4]]
[[2]
[5]]
[[6 7]]
[[8]]
If you need actual copies, Aenaon's code is what you are looking for.
If you are sure that big array can be divided evenly, you can use numpy splitting tools.
To add to @Aenaon's answer and his blockfy function: if you are working with color images / 3D arrays, here is my pipeline to create crops of 224 x 224 for 3-channel input
def blockfy(a, p, q):
    '''
    Divides array a into subarrays of size p-by-q
    p: block row size
    q: block column size
    '''
    m = a.shape[0]  #image row size
    n = a.shape[1]  #image column size

    # pad array with NaNs so it can be divided by p row-wise and by q column-wise
    bpr = ((m-1)//p + 1)  #blocks per row
    bpc = ((n-1)//q + 1)  #blocks per column
    M = p * bpr
    N = q * bpc

    A = np.nan * np.ones([M, N])
    A[:a.shape[0], :a.shape[1]] = a

    block_list = []
    previous_row = 0
    for row_block in range(bpr):
        previous_row = row_block * p
        previous_column = 0
        for column_block in range(bpc):
            previous_column = column_block * q
            block = A[previous_row:previous_row+p, previous_column:previous_column+q]

            # remove nan columns and nan rows
            nan_cols = np.all(np.isnan(block), axis=0)
            block = block[:, ~nan_cols]
            nan_rows = np.all(np.isnan(block), axis=1)
            block = block[~nan_rows, :]

            ## append
            if block.size:
                block_list.append(block)

    return block_list
and then extended the above to
import os
import numpy as np
from skimage import io   # assumption: images are read with scikit-image
from PIL import Image

for file in os.listdir(path_to_crop):  ### list files in your folder
    img = io.imread(path_to_crop + file, as_gray=False)  ### open image

    r = blockfy(img[:,:,0], 224, 224)  ### crop blocks of 224 x 224 for red channel
    g = blockfy(img[:,:,1], 224, 224)  ### crop blocks of 224 x 224 for green channel
    b = blockfy(img[:,:,2], 224, 224)  ### crop blocks of 224 x 224 for blue channel

    for x in range(0, len(r)):
        img = np.array((r[x], g[x], b[x]))  ### combine the channels patch by patch
        img = img.astype(np.uint8)          ### cast back to proper integers
        img_swap = img.swapaxes(0, 2)       ### need to swap axes due to the way things were processed
        img_swap_2 = img_swap.swapaxes(0, 1)  ### do it again
        Image.fromarray(img_swap_2).save(path_save_crop + str(x) + "bounding" + file,
                                         format='jpeg',
                                         subsampling=0,
                                         quality=100)  ### save patch with new name etc
I want to run a relatively simple random draw in numpy, but I can't find a good way to express it.
I think the best way is to describe it as drawing from an urn without replacement. I have an urn with k colors, and n_k balls of every color. I want to draw m balls, and know how many balls of every color I have.
My current attempt is
np.bincount(np.random.permutation(np.repeat(np.arange(k), n_k))[:m], minlength=k)
here, n_k is an array of length k with the counts of the balls.
It seems that's equivalent to
np.bincount(np.random.choice(k, m, p=n_k / n_k.sum()), minlength=k)
which is a bit better, but still not great.
What you want is an implementation of the multivariate hypergeometric distribution.
I don't know of one in numpy or scipy, but it might already exist out there somewhere.
I contributed an implementation of the multivariate hypergeometric distribution to numpy 1.18.0; see numpy.random.Generator.multivariate_hypergeometric.
For example, to draw 15 samples from an urn containing 12 red, 4 green and 18 blue marbles, and repeat the process 10 times:
In [4]: import numpy as np
In [5]: rng = np.random.default_rng()
In [6]: colors = [12, 4, 18]
In [7]: rng.multivariate_hypergeometric(colors, 15, size=10)
Out[7]:
array([[ 5, 4, 6],
[ 3, 3, 9],
[ 6, 2, 7],
[ 7, 2, 6],
[ 3, 0, 12],
[ 5, 2, 8],
[ 6, 2, 7],
[ 7, 1, 7],
[ 8, 1, 6],
[ 6, 1, 8]])
The rest of this answer is now obsolete, but I'll leave it for posterity (whatever that means...).
You can implement it using repeated calls to numpy.random.hypergeometric. Whether that will be more efficient than your implementation depends on how many colors there are and how many balls of each color.
For example, here's a script that prints the result of drawing from an urn containing three colors (red, green and blue):
from __future__ import print_function
import numpy as np
nred = 12
ngreen = 4
nblue = 18
m = 15
red = np.random.hypergeometric(nred, ngreen + nblue, m)
green = np.random.hypergeometric(ngreen, nblue, m - red)
blue = m - (red + green)
print("red: %2i" % red)
print("green: %2i" % green)
print("blue: %2i" % blue)
Sample output:
red: 6
green: 1
blue: 8
The following function generalizes that to choosing m balls given an array colors holding the number of each color:
def sample(m, colors):
    """
    Parameters
    ----------
    m : number of balls to draw from the urn
    colors : one-dimensional array of the number of balls of each color in the urn

    Returns
    -------
    One-dimensional array with the same length as `colors` containing the
    number of balls of each color in a random sample.
    """
    remaining = np.cumsum(colors[::-1])[::-1]
    result = np.zeros(len(colors), dtype=int)
    for i in range(len(colors)-1):
        if m < 1:
            break
        result[i] = np.random.hypergeometric(colors[i], remaining[i+1], m)
        m -= result[i]
    result[-1] = m
    return result
For example,
>>> sample(10, [2, 4, 8, 16])
array([2, 3, 1, 4])
The following should work (note replace=False, since the draw is without replacement):
def make_sampling_arr(n_k):
    # one entry per ball, labeled by its color index
    out = [x for s in [[i] * n_k[i] for i in range(len(n_k))] for x in s]
    return out

np.random.choice(make_sampling_arr(n_k), m, replace=False)
I have the following array:
import numpy as np
a = np.array([[ 1, 2, 3],
[ 1, 2, 3],
[ 1, 2, 3]])
I understand that np.random.shuffle(a.T) will shuffle the array along the rows, but what I need is for it to shuffle each row independently. How can this be done in numpy? Speed is critical as there will be several million rows.
For this specific problem, each row will contain the same starting population.
import numpy as np
np.random.seed(2018)
def scramble(a, axis=-1):
    """
    Return an array with the values of `a` independently shuffled along the
    given axis
    """
    b = a.swapaxes(axis, -1)
    n = a.shape[axis]
    idx = np.random.choice(n, n, replace=False)
    b = b[..., idx]
    return b.swapaxes(axis, -1)
a = np.arange(4*9).reshape(4, 9)
# array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8],
# [ 9, 10, 11, 12, 13, 14, 15, 16, 17],
# [18, 19, 20, 21, 22, 23, 24, 25, 26],
# [27, 28, 29, 30, 31, 32, 33, 34, 35]])
print(scramble(a, axis=1))
yields
[[ 3 8 7 0 4 5 1 2 6]
[12 17 16 9 13 14 10 11 15]
[21 26 25 18 22 23 19 20 24]
[30 35 34 27 31 32 28 29 33]]
while scrambling along the 0-axis:
print(scramble(a, axis=0))
yields
[[18 19 20 21 22 23 24 25 26]
[ 0 1 2 3 4 5 6 7 8]
[27 28 29 30 31 32 33 34 35]
[ 9 10 11 12 13 14 15 16 17]]
This works by first swapping the target axis with the last axis:
b = a.swapaxes(axis, -1)
This is a common trick used to standardize code which deals with one axis.
It reduces the general case to the specific case of dealing with the last axis.
Since in NumPy version 1.10 or higher swapaxes returns a view, there is no copying involved and so calling swapaxes is very quick.
Now we can generate a new index order for the last axis:
n = a.shape[axis]
idx = np.random.choice(n, n, replace=False)
Now we can shuffle b (independently along the last axis):
b = b[..., idx]
and then reverse the swapaxes to return an a-shaped result:
return b.swapaxes(axis, -1)
If you don't want a return value and want to operate on the array directly, you can specify the indices to shuffle.
>>> import numpy as np
>>>
>>>
>>> a = np.array([[1,2,3], [1,2,3], [1,2,3]])
>>>
>>> # Shuffle row `2` independently
>>> np.random.shuffle(a[2])
>>> a
array([[1, 2, 3],
[1, 2, 3],
[3, 2, 1]])
>>>
>>> # Shuffle column `0` independently
>>> np.random.shuffle(a[:,0])
>>> a
array([[3, 2, 3],
[1, 2, 3],
[1, 2, 1]])
If you want a return value as well, you can use numpy.random.permutation, in which case replace np.random.shuffle(a[n]) with a[n] = np.random.permutation(a[n]).
Warning, do not do a[n] = np.random.shuffle(a[n]). shuffle does not return anything, so the row/column you end up "shuffling" will be filled with nan instead.
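A minimal sketch of the permutation variant:
import numpy as np

a = np.array([[1, 2, 3], [1, 2, 3], [1, 2, 3]])
a[1] = np.random.permutation(a[1])  # shuffles row 1 and assigns the result back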
Good answer above. But I will throw in a quick and dirty way:
a = np.array([[1,2,3], [4,5,6], [7,8,9]])
ignore_list_output = [np.random.shuffle(x) for x in a]
Then, a can be something like this
array([[2, 1, 3],
[4, 6, 5],
[9, 7, 8]])
Not very elegant but you can get this job done with just one short line.
Building on my comment to #Hun's answer, here's the fastest way to do this:
def shuffle_along(X):
    """Minimal in place independent-row shuffler."""
    [np.random.shuffle(x) for x in X]
This works in-place and can only shuffle rows. If you need more options:
def shuffle_along(X, axis=0, inline=False):
    """More elaborate version of the above."""
    if not inline:
        X = X.copy()
    if axis == 0:
        [np.random.shuffle(x) for x in X]
    if axis == 1:
        [np.random.shuffle(x) for x in X.T]
    if not inline:
        return X
This, however, has the limitation of only working on 2d-arrays. For higher dimensional tensors, I would use:
def shuffle_along(X, axis=0, inline=True):
    """Shuffle along any axis of a tensor."""
    if not inline:
        X = X.copy()
    np.apply_along_axis(np.random.shuffle, axis, X)  # <-- I just changed this
    if not inline:
        return X
You can do it with numpy without any loop or extra function, and much faster. E.g., say we have an array of size (6, 2) and we want a sub-array of shape (2, 2) with an independent random index for each column.
import numpy as np
test = np.array([[1, 1],
[2, 2],
[0.5, 0.5],
[0.3, 0.3],
[4, 4],
[7, 7]])
id_rnd = np.random.randint(6, size=(2, 2))  # select random row indices; use np.random.choice over a range if you don't want replacement
new = np.take_along_axis(test, id_rnd, axis=0)
Out:
array([[2. , 2. ],
[0.5, 2. ]])
It works for any number of dimensions.
As of NumPy 1.20.0 released in January 2021 we have a permuted() method on the new Generator type (introduced with the new random API in NumPy 1.17.0, released in July 2019). This does exactly what you need:
import numpy as np
rng = np.random.default_rng()
a = np.array([
[1, 2, 3],
[1, 2, 3],
[1, 2, 3],
])
shuffled = rng.permuted(a, axis=1)
This gives you something like
>>> print(shuffled)
[[2 3 1]
[1 3 2]
[2 1 3]]
As you can see, the rows are permuted independently. This is in sharp contrast with both rng.permutation() and rng.shuffle().
If you want an in-place update you can pass the original array as the out keyword argument. And you can use the axis keyword argument to choose the direction along which to shuffle your array.
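For example, a minimal in-place sketch:
rng.permuted(a, axis=1, out=a)  # rows of a are now shuffled independently, in place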
I'm trying to convert my MATLAB code to python but I'm having some issues. This code is supposed to segment letters from a picture.
Here's the whole code in MATLAB:
he = imread('r.jpg');
imshow(he);
%C = makecform(type) creates the color transformation structure C that defines the color space conversion specified by type.
cform = makecform('srgb2lab');
%To perform the transformation, pass the color transformation structure as an argument to the applycform function.
lab_he = applycform(he,cform);
%convert to double precision
ab = double(lab_he(:,:,2:3));
%size of dimension in 2D array
nrows = size(ab,1);
ncols = size(ab,2);
%B = reshape(A,sz1,...,szN) reshapes A into a sz1-by-...-by-szN array where
%sz1,...,szN indicates the size of each dimension. You can specify a single
% dimension size of [] to have the dimension size automatically calculated,
% such that the number of elements in B matches the number of elements in A.
% For example, if A is a 10-by-10 matrix, then reshape(A,2,2,[]) reshapes
% the 100 elements of A into a 2-by-2-by-25 array.
ab = reshape(ab,nrows*ncols,2);
% repeat the clustering 3 times to avoid local minima
nColors = 3;
[cluster_idx, cluster_center] = kmeans(ab,nColors,'distance','sqEuclidean', ...
'Replicates',3);
pixel_labels = reshape(cluster_idx,nrows,ncols);
imshow(pixel_labels,[]), title('image labeled by cluster index');
segmented_images = cell(1,3);
rgb_label = repmat(pixel_labels,[1 1 3]);
for k = 1:nColors
color = he;
color(rgb_label ~= k) = 0;
segmented_images{k} = color;
end
figure,imshow(segmented_images{1}), title('objects in cluster 1');
figure,imshow(segmented_images{2}), title('objects in cluster 2');
figure,imshow(segmented_images{3}), title('objects in cluster 3');
mean_cluster_value = mean(cluster_center,2);
[tmp, idx] = sort(mean_cluster_value);
blue_cluster_num = idx(1);
L = lab_he(:,:,1);
blue_idx = find(pixel_labels == blue_cluster_num);
L_blue = L(blue_idx);
is_light_blue = im2bw(L_blue,graythresh(L_blue));
% target_labels = repmat(uint8(0),[nrows ncols]);
% target_labels(blue_idx(is_light_blue==false)) = 1;
% target_labels = repmat(target_labels,[1 1 3]);
% blue_target = he;
% blue_target(target_labels ~= 1) = 0;
% figure,imshow(blue_target), title('blue');
Here's what I have in Python so far:
import cv2
import numpy as np
from matplotlib import pyplot as plt
import sys
img = cv2.imread('r.jpg',1)
print "original image: ", img.shape
cv2.imshow('BGR', img)
img1 = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img2 = cv2.cvtColor(img1, cv2.COLOR_RGB2LAB)
cv2.imshow('RGB', img1)
cv2.imshow('LAB', img2) #differs from the LAB color space in MATLAB (need to patch maybe?)
print "LAB converted image: ", img2.shape
print "LAB converted image dimension", img2.ndim #says the image is a 3 dimensional array
img2a = img2.shape[2][1:2]
print "Slicing the LAB converted image", img2a
#we need to convert that to double precision
print img2.dtype
img2a = img2.astype(np.uint64) #convert to double precision
print img2a.dtype
#print img2a
row = img2a.shape[0] #returns number of rows of img2a
column = img2a.shape[1] #returns number of columns of img2a
print "row: ", row #matches the MATLAB version
print "column: ", column #matchees the MATLAB version
rowcol = row * column
k = cv2.waitKey(0)
if k == 27:          # wait for ESC key to exit
    cv2.destroyAllWindows()
elif k == ord('s'):  # wait for 's' key to save and exit
    cv2.imwrite('final image', final_image)
    cv2.destroyAllWindows()
Now the part I'm currently stuck on is this: in the MATLAB code I have lab_he(:,:,2:3), which means all the rows and all the columns from depths 2 and 3. However, when I try to replicate that in Python with img2a = img2.shape[2][1:2], it doesn't work or make sense. Please help.
In Octave/MATLAB
octave:29> x=reshape(1:(2*3*4),3,2,4);
octave:30> x(:,:,2:3)
ans =
ans(:,:,1) =
7 10
8 11
9 12
ans(:,:,2) =
13 16
14 17
15 18
octave:31> size(x(:,:,2:3))
ans =
3 2 2
octave:33> x(:,:,2:3)(2,2,:)
ans(:,:,1) = 11
ans(:,:,2) = 17
In numpy:
In [13]: x=np.arange(1,1+2*3*4).reshape(3,2,4,order='F')
In [14]: x[:,:,1:3]
Out[14]:
array([[[ 7, 13],
[10, 16]],
[[ 8, 14],
[11, 17]],
[[ 9, 15],
[12, 18]]])
In [15]: _.shape
Out[15]: (3, 2, 2)
In [17]: x[:,:,1:3][1,1,:]
Out[17]: array([11, 17])
Or with numpy normal 'C' ordering, and indexing on the 1st dimension
In [18]: y=np.arange(1,1+2*3*4).reshape(4,2,3)
In [19]: y[1:3,:,:]
Out[19]:
array([[[ 7, 8, 9],
[10, 11, 12]],
[[13, 14, 15],
[16, 17, 18]]])
In [20]: y[1:3,:,:][:,1,1]
Out[20]: array([11, 17])
Same indexing ideas, though matching numbers and shapes requires some care, not only with the 0 vs. 1 index base. A 3d array is displayed in a different arrangement. Octave divides it into blocks on the last index (its primary iterator); numpy iterates on the first index.
In 3d it makes more sense to talk about first, 2nd, 3rd dimensions rather than row,col,depth. In 4d you run out of names. :)
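Applied to the question's code (assuming img2 is the LAB image from the Python snippet above), the MATLAB line ab = double(lab_he(:,:,2:3)) would therefore become something like:
ab = img2[:, :, 1:3].astype(np.float64)  # channels 2 and 3, as double precision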
I had to reshape an array at a specific depth, and I programmed a little recursive function to do so:
def recursive_array_cutting(tab, depth, i, min, max):
    if i == depth:
        tab = tab[min:max]
        return tab

    temp_list = []
    nb_subtab = len(tab)
    for index in range(nb_subtab):
        temp_list.append(recursive_array_cutting(tab[index], depth, i+1, min, max))
    return np.asanyarray(temp_list)
It allows you to get all arrays/values between min and max at a specific depth. For instance, if you have a (3, 4) tab = [[0, 1, 2, 3], [0, 1, 2, 3], [0, 1, 2, 3]] and only want the first two values of the deepest arrays, you call it like this: tab = recursive_array_cutting(tab, 1, 0, 0, 2), which gives the output [[0 1][0 1][0 1]].
If you have a more complex array like tab = [[[0, 1, 2, 3], [1, 1, 2, 3], [2, 1, 2, 3]], [[0, 1, 2, 3], [1, 1, 2, 3], [2, 1, 2, 3]], [[0, 1, 2, 3], [1, 1, 2, 3], [2, 1, 2, 3]]] of shape (3, 3, 4) and want a (3, 2, 4) array, you can call it like this: tab = recursive_array_cutting(tab, 1, 0, 0, 2) to get that output and cut the last entries at depth 1.
A function like this surely exists in numpy, but I did not find it.
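For what it's worth, plain numpy slicing covers both examples above (a sketch; tab here is built to match the (3, 4) example):
import numpy as np

tab = np.tile(np.arange(4), (3, 1))  # the (3, 4) example
out = tab[:, 0:2]                    # same result as recursive_array_cutting(tab, 1, 0, 0, 2)
# and for the (3, 3, 4) example, tab[:, 0:2] likewise yields the (3, 2, 4) array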