Efficient sampling without replacement from a multiset

I want to run a relatively simple random draw in numpy, but I can't find a good way to express it.
I think the best way is to describe it as drawing from an urn without replacement. I have an urn with k colors, and n_k balls of every color. I want to draw m balls, and know how many balls of every color I have.
My current attempt is
np.bincount(np.random.permutation(np.repeat(np.arange(k), n_k))[:m], minlength=k)
here, n_k is an array of length k with the counts of the balls.
It seems that's close to
np.bincount(np.random.choice(k, m, p=n_k / n_k.sum()), minlength=k)
which is a bit shorter, but np.random.choice with probabilities samples with replacement, so it is not actually equivalent; and it's still not great.

What you want is an implementation of the multivariate hypergeometric distribution.
I don't know of one in numpy or scipy, but it might already exist out there somewhere.
I contributed an implementation of the multivariate hypergeometric distribution to numpy 1.18.0; see numpy.random.Generator.multivariate_hypergeometric.
For example, to draw 15 samples from an urn containing 12 red, 4 green and 18 blue marbles, and repeat the process 10 times:
In [4]: import numpy as np
In [5]: rng = np.random.default_rng()
In [6]: colors = [12, 4, 18]
In [7]: rng.multivariate_hypergeometric(colors, 15, size=10)
Out[7]:
array([[ 5, 4, 6],
[ 3, 3, 9],
[ 6, 2, 7],
[ 7, 2, 6],
[ 3, 0, 12],
[ 5, 2, 8],
[ 6, 2, 7],
[ 7, 1, 7],
[ 8, 1, 6],
[ 6, 1, 8]])
The rest of this answer is now obsolete, but I'll leave it for posterity (whatever that means...).
You can implement it using repeated calls to numpy.random.hypergeometric. Whether that will be more efficient than your implementation depends on how many colors there are and how many balls of each color.
For example, here's a script that prints the result of drawing from an urn containing three colors (red, green and blue):
from __future__ import print_function
import numpy as np
nred = 12
ngreen = 4
nblue = 18
m = 15
red = np.random.hypergeometric(nred, ngreen + nblue, m)
green = np.random.hypergeometric(ngreen, nblue, m - red)
blue = m - (red + green)
print("red: %2i" % red)
print("green: %2i" % green)
print("blue: %2i" % blue)
Sample output:
red: 6
green: 1
blue: 8
The following function generalizes that to choosing m balls given an array colors holding the number of each color:
def sample(m, colors):
    """
    Parameters
    ----------
    m : number of balls to draw from the urn
    colors : one-dimensional array of the number of balls of each color in the urn

    Returns
    -------
    One-dimensional array with the same length as `colors` containing the
    number of balls of each color in a random sample.
    """
    remaining = np.cumsum(colors[::-1])[::-1]
    result = np.zeros(len(colors), dtype=int)
    for i in range(len(colors) - 1):
        if m < 1:
            break
        result[i] = np.random.hypergeometric(colors[i], remaining[i + 1], m)
        m -= result[i]
    result[-1] = m
    return result
For example,
>>> sample(10, [2, 4, 8, 16])
array([2, 3, 1, 4])
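As a quick sanity check (my addition, not part of the original answer), the mean count of each color over many draws should approach m * colors / colors.sum():

import numpy as np

colors = np.array([12, 4, 18])
m = 15
draws = np.array([sample(m, colors) for _ in range(10000)])
print(draws.mean(axis=0))         # empirical mean count per color
print(m * colors / colors.sum())  # expected: [5.294..., 1.764..., 7.941...]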

The following should work, as long as the draw is made with replace=False (np.random.choice samples with replacement by default); np.bincount then recovers the per-color counts:
def make_sampling_arr(n_k):
    return [x for s in [[i] * n_k[i] for i in range(len(n_k))] for x in s]

np.bincount(np.random.choice(make_sampling_arr(n_k), m, replace=False), minlength=len(n_k))


Find the number of clusters in a list of integers

Let's consider the distance d(a, b) = number of digits which are pairwise different in a and b, e.g.:
d(1003000000, 1000090000) = 2 # the 4th and 6th digits don't match
(we only work with 10-digit numbers) and this list:
L = [2678888873,
2678878873, # distance 1 from L[0]
1000000000,
1000040000, # distance 1 from L[2]
1000300000, # distance 1 from L[2], distance 2 from L[3]
1000300009, # distance 1 from L[4], distance 2 from L[2]
]
I would like to find the minimal number of points P such that each integer in the list is at distance <= 1 from a point in P.
Here I think this number is 3: every number in the list is at distance <= 1 from 2678888873, 1000000000, or 1000300009.
I imagine an O(n^2) algorithm is possible by first computing a distance matrix i.e. M[i, j] = d(L[i], L[j]).
Is there a better way to do this, especially using Numpy? (maybe there's a built-in algorithm in Numpy/Scipy?)
PS: If we see these 10-digit integers as strings, we're close to finding a minimal number of clusters in a list of many words with a Levenshtein distance.
PS2: I now realize this distance has a name for strings: the Hamming distance.
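For concreteness (an illustration added here, not part of the original question), the distance can be computed digit by digit; it is just the Hamming distance on the zero-padded decimal strings:

def d(a, b, z=10):
    # count positions where the decimal digits of a and b differ
    return sum(x != y for x, y in zip(str(a).zfill(z), str(b).zfill(z)))

print(d(1003000000, 1000090000))  # 2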
Let's see what we know from the distance metric. Given a number P (not necessarily in L), if two members of L are within distance 1 of P, they each share 9 digits with P, but not necessarily the same ones, so they are only guaranteed to share 8 digits with each other. Conversely, any two numbers at distance 2 from each other are guaranteed to have two unique Ps that are at distance 1 from each of them (and at distance 2 from each other as well). You can use this information to reduce the amount of brute-force effort required to optimize the selection of P.
Let's say you have a distance matrix. You can immediately discard rows (or columns) that don't have entries less than 3: they are their own cluster automatically. For the remaining entries that are equal to 2, construct a list of possible P values. Find the number of elements of L that are within 1 of each element of P (another distance matrix). Sort P by the number of neighbors, and select. You will need to update the matrix at each iteration as you remove members with maximal neighbors to avoid inefficient grouping due to overlap (members of L that are near multiple members of P).
You can compute a distance matrix for L in numpy by first converting it to a 2D array of digits:
L = np.array([2678888873, 2678878873, 1000000000, 1000040000, 1000300000, 1000300009])
z = 10 # Number of digits
n = len(L) # Number of numbers
dec = 10**np.arange(z).reshape(-1, 1).astype(np.int64)
digits = (L // dec) % 10
digits is now a 10xN array:
array([[3, 3, 0, 0, 0, 9],
[7, 7, 0, 0, 0, 0],
[8, 8, 0, 0, 0, 0],
[8, 8, 0, 0, 0, 0],
[8, 7, 0, 4, 0, 0],
[8, 8, 0, 0, 3, 3],
[8, 8, 0, 0, 0, 0],
[7, 7, 0, 0, 0, 0],
[6, 6, 0, 0, 0, 0],
[2, 2, 1, 1, 1, 1]], dtype=int64)
You can compute the distance between digits and itself, or digits and any other 10xM array using != and sum along the right axis:
distance = (digits[:, None, :] != digits[..., None]).sum(axis=0)
The result:
array([[ 0, 1, 10, 10, 10, 10],
[ 1, 0, 10, 10, 10, 10],
[10, 10, 0, 1, 1, 2],
[10, 10, 1, 0, 2, 3],
[10, 10, 1, 2, 0, 1],
[10, 10, 2, 3, 1, 0]])
We are only concerned with the upper (or lower) triangle of that matrix, so we can immediately mask out the other triangle:
distance[np.tril_indices(n)] = z + 1
Find all candidate values of P: all elements of L, plus extra candidates constructed from each pair of elements at distance 2:
# Find indices of pairs that differ by 2
indices = np.nonzero(distance == 2)
# Extract those numbers as 10xKx2 array
d = digits[:, np.stack(indices, axis=1)]
# Compute where the difference is nonzero (Kx2)
locs = np.diff(d, axis=2).astype(bool).squeeze()
# Find the index of the first digit to replace (K)
s = np.argmax(locs, axis=0)
The extra values of P are constructed from each half of d, with the digit at the position given by s swapped in from the other half:
P0 = digits[:, indices[0]]
P1 = digits[:, indices[1]]
k = np.arange(s.size)
tmp = P0[s, k]
P0[s, k] = P1[s, k]
P1[s, k] = tmp
Pextra = np.unique(np.concatenate((P0, P1), axis=1), axis=1)
So now you can compute the total set of possibilities for P:
P = np.concatenate((digits, Pextra), axis=1)
distance2 = (P[:, None, :] != digits[..., None]).sum(axis=0)
You can discard any elements of Pextra that match with elements of digits based on the distance:
mask = np.concatenate((np.ones(n, bool), distance2[:, n:].all(axis=0)))
P = P[:, mask]
distance2 = distance2[:, mask]
Now you can iteratively compute distances between P and L and select the best values of P, removing any values that have been selected from the distance matrix. A greedy selection from P will not necessarily be optimal, since an alternative combination may require fewer elements due to overlaps, but that is a matter for a simple (but somewhat expensive) graph traversal algorithm. The following snippet just shows a simple greedy selection, which will work fine for your toy example:
distMask = distance2 <= 1
quality = distMask.sum(axis=0)
clusters = []
accounted = 0
while accounted < n:
    # Get the cluster location
    best = np.argmax(quality)
    # Get the cluster number
    clusters.append(P[:, best].dot(dec).item())
    # Remove numbers in the cluster from consideration
    accounted += quality[best]
    quality -= distMask[distMask[:, best], :].sum(axis=0)
The last couple of steps can be optimized using sets and graphs, but this shows a starting point for a valid approach. This is going to be slow for large data, but probably not prohibitively so. Do some benchmarks to decide how much time you want to spend optimizing vs just running the algorithm.

Python iterate through connected components in grayscale image

I have a grayscale image with values between 0 (black) and 255 (white). I have a target matrix of the same size as the grayscale image. I need to start at a random pixel in the grayscale image and traverse through the image one pixel at a time (in a depth-first search manner), copying its value to the corresponding location in the target matrix. I obviously need to do this only for the non-white pixels. How can I do this? I thought that I could get the connected components of the grayscale image and traverse each pixel one by one, but I couldn't find any suitable implementation of connected components. Any ideas?
For example, if my gray scale image is:
[[255,255,255,255,255,255,255]
[255,255, 0 ,10 ,255,255, 1 ]
[255,30 ,255,255,50 ,255, 9 ]
[51 ,20 ,255,255, 9 ,255,240]
[255,255,80 ,50 ,170,255, 20]
[255,255,255,255,255,255, 0 ]
[255,255,255,255,255,255, 69]]
Then a possible traversal is [0,10,50,9,170,50,80,20,51,30] followed by [1,9,240,20,0,69] to give [0,10,50,9,170,50,80,20,51,30,1,9,240,20,0,69]. The order between the different objects doesn't matter.
Other possible traversals are:
[1,9,240,20,0,69,0,10,50,9,170,50,80,20,51,30] or [1,9,240,20,0,69,0,10,50,9,170,50,80,20,30,51] or
[1,9,240,20,0,69,10,50,9,170,50,80,20,30,0,51]
etc.
You can use networkx:
from itertools import product, repeat
import numpy as np
import networkx as nx
arr = np.array(
[[255,255,255,255,255,255,255],
[255,255, 0 ,10 ,255,255, 1 ],
[255,30 ,255,255,50 ,255, 9 ],
[51 ,20 ,255,255, 9 ,255,240],
[255,255,80 ,50 ,170,255, 20],
[255,255,255,255,255,255, 0 ],
[255,255,255,255,255,255, 69]])
# generate edges
shift = list(product(*repeat([-1, 0, 1], 2)))
x_max, y_max = arr.shape
edges = []
for x, y in np.ndindex(arr.shape):
    for x_delta, y_delta in shift:
        x_neighb = x + x_delta
        y_neighb = y + y_delta
        if (0 <= x_neighb < x_max) and (0 <= y_neighb < y_max):
            edge = (x, y), (x_neighb, y_neighb)
            edges.append(edge)
# build graph
G = nx.from_edgelist(edges)
# draw graph
pos = {(x, y): (y, x_max-x) for x, y in G.nodes()}
nx.draw(G, with_labels=True, pos=pos, node_color='coral', node_size=1000)
# draw graph with numbers
labels = dict(np.ndenumerate(arr))
node_color = ['coral' if labels[n] == 255 else 'lightgrey' for n in G.nodes()]
nx.draw(G, with_labels=True, pos=pos, labels=labels, node_color=node_color, node_size=1000)
# build subgraph
select = np.argwhere(arr < 255)
G1 = G.subgraph(map(tuple, select))
# draw subgraph
pos = {(x, y): (y, x_max-x) for x, y in G1.nodes()}
labels1 = {n:labels[n] for n in G1.nodes()}
nx.draw(G1, with_labels=True, pos=pos, labels=labels1, node_color='lightgrey', node_size=1000)
# find connected components and DFS trees
for i in nx.connected_components(G1):
    source = next(iter(i))
    idx = nx.dfs_tree(G1, source=source)
    print(arr[tuple(np.array(idx).T)])
Output:
[ 0 10 50 9 50 80 20 30 51 170]
[ 9 1 240 20 0 69]
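As a side note (my addition, assuming the same G1 as above): networkx can also yield the DFS ordering directly with dfs_preorder_nodes, without building an intermediate tree:

for comp in nx.connected_components(G1):
    order = list(nx.dfs_preorder_nodes(G1, source=next(iter(comp))))
    print(arr[tuple(np.array(order).T)])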
After much research for a suitable implementation of connected components, I came up with my own solution. To get the best performance I could, I relied on these rules:
Not to use networkx, because it is slow according to this benchmark.
Use vectorized operations as much as possible, because Python-level iteration is slow according to this answer.
I'm implementing a connected-components algorithm for the image here only because I believe it is an essential part of this question.
Algorithm of connected components of image
import numpy as np
import numexpr as ne
import pandas as pd
import igraph
def get_coords(arr):
    x, y = np.indices(arr.shape)
    mask = arr != 255
    return np.array([x[mask], y[mask]]).T

def compare(r1, r2):
    # assuming r1 is a sorted array, returns:
    # 1) locations of r2 items in r1
    # 2) mask array of these locations
    idx = np.searchsorted(r1, r2)
    idx[idx == len(r1)] = 0
    mask = r1[idx] == r2
    return idx, mask

def get_reduction(coords, s):
    d = {'s': s, 'c0': coords[:, 0], 'c1': coords[:, 1]}
    return ne.evaluate('c0*s+c1', d)

def get_bounds(coords, increment):
    return np.max(coords[:, 1]) + 1 + increment

def get_shift_intersections(coords, shifts):
    # neighbours found for each node [[0,1,2],...]
    s = get_bounds(coords, 10)
    rdim = get_reduction(coords, s)
    shift_mask, shift_idx = [], []
    for sh in shifts:
        sh_rdim = get_reduction(coords + sh, s)
        sh_idx, sh_mask = compare(rdim, sh_rdim)
        shift_idx.append(sh_idx)
        shift_mask.append(sh_mask)
    return np.array(shift_idx).T, np.array(shift_mask).T

def connected_components(coords, shifts):
    shift_idx, shift_mask = get_shift_intersections(coords, shifts)
    x, y = np.indices((len(shift_idx), len(shift_idx[0])))
    vertices = np.arange(len(coords))
    edges = np.array([x[shift_mask], shift_idx[shift_mask]]).T
    graph = igraph.Graph()
    graph.add_vertices(vertices)
    graph.add_edges(edges)
    graph_tags = graph.clusters().membership
    values = pd.DataFrame(graph_tags).groupby([0]).indices
    return values

coords = get_coords(arr)
shifts = ((0, 1), (1, 0), (1, 1), (-1, 1))
comps = connected_components(coords, shifts=shifts)
for c in comps:
    print(coords[comps[c]].tolist())
Outcome
[[1, 2], [1, 3], [2, 1], [2, 4], [3, 0], [3, 1], [3, 4], [4, 2], [4, 3], [4, 4]]
[[1, 6], [2, 6], [3, 6], [4, 6], [5, 6], [6, 6]]
Explanation
The algorithm consists of these steps:
We need to convert the image to the coordinates of the non-white cells, which can be done with the function:
def get_coords(arr):
    x, y = np.indices(arr.shape)
    mask = arr != 255
    return np.array([x[mask], y[mask]]).T
I'll call the resulting array X for clarity.
Next, we need to consider all the cells of each shift that intersect with X. In order to do that, we need to solve the problem of intersections I posted a few days before. I found it quite difficult to solve using multidimensional numpy arrays. Thanks to Divakar, who proposed a nice way of dimensionality reduction using the numexpr package, which speeds up operations even more than numpy. I implement it in this function:
def get_reduction(coords, s):
    d = {'s': s, 'c0': coords[:, 0], 'c1': coords[:, 1]}
    return ne.evaluate('c0*s+c1', d)
To make this work, we need to set a bound s, which can be calculated automatically with the function
def get_bounds(coords, increment):
    return np.max(coords[:, 1]) + 1 + increment
or entered manually. Since the algorithm shifts the coordinates, pairs of coordinates might go out of bounds, so I have used a slight increment here. Finally, as a solution to the post I mentioned, the indexes of the coordinates of X (reduced to 1D) that intersect with any other array of coordinates Y (also reduced to 1D) can be accessed via the function
def compare(r1, r2):
    # assuming r1 is a sorted array, returns:
    # 1) locations of r2 items in r1
    # 2) mask array of these locations
    idx = np.searchsorted(r1, r2)
    idx[idx == len(r1)] = 0
    mask = r1[idx] == r2
    return idx, mask
We plug in the corresponding arrays of shifts. As you can see, the function above outputs two variables: an array of index locations in the main set X, and its mask array. The proper indexes can be found with idx[mask], and since this procedure is applied for each shift, I implemented the get_shift_intersections(coords, shifts) method for this case.
Finally: constructing nodes & edges and taking the output from igraph. The point here is that igraph performs well only with nodes that are consecutive integers starting from 0, which is why the script uses mask-based access to item locations in X. Briefly, this is how I used igraph:
I have calculated coordinate pairs:
[[1, 2], [1, 3], [1, 6], [2, 1], [2, 4], [2, 6], [3, 0], [3, 1], [3, 4], [3, 6], [4, 2], [4, 3], [4, 4], [4, 6], [5, 6], [6, 6]]
Then I assigned integers to them:
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15]
My edges look like this:
[[0, 1], [1, 4], [2, 5], [3, 7], [3, 0], [4, 8], [5, 9], [6, 7], [6, 3], [7, 10], [8, 12], [9, 13], [10, 11], [11, 12], [11, 8], [13, 14], [14, 15]]
Output of graph.clusters().membership looks like this:
[0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1]
And finally, I used the groupby method of Pandas to find the indexes of the separate groups (I use Pandas here because I found it to be the most efficient way of grouping in Python).
Notes
Installing igraph is not straightforward; you might need to install it from unofficial binaries.
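For completeness (my addition, not part of either answer): scipy does ship a suitable connected-components implementation, scipy.ndimage.label, although it yields each component's pixels in raster order rather than DFS order:

import numpy as np
from scipy import ndimage

# label 8-connected components of the non-white pixels
labels, n = ndimage.label(arr != 255, structure=np.ones((3, 3)))
for i in range(1, n + 1):
    print(arr[labels == i])  # component values, in row-major order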

How to calculate moving average of NumPy array with varying window sizes defined by an array of indices?

Which is the most pythonic way to average the values in a 2d array (axis=1) based on a range in a 1d array?
I am trying to average arrays of environmental variables (my 2d array) based on every 2 degrees of latitude (my id array). I have a latitude array that goes from -33.9 to 29.5. I'd like to average the environmental variables within every 2 degrees from -34 to 30.
The number of elements within each 2 degrees may be different, for example:
arr = array([[5,3,4,5,6,4,2,4,5,8],
[4,5,8,5,2,3,6,4,1,7],
[8,3,5,8,5,2,5,9,9,4]])
idx = array([1,1,1,2,2,3,3,3,3,4])
I would then average the values in arr over the column groups defined by idx: columns 0:3, 3:9, and 9.
I would like to get a result of:
arrAvg = array([[4,   4.3, 8],
                [5.7, 3.5, 7],
                [5.3, 6.3, 4]])
@Andyk already explained in his post how to calculate the average given a list of indices.
I will provide a solution for getting those indices.
Here is a general approach:
from typing import Optional
import numpy as np
def get_split_indices(array: np.ndarray,
                      *,
                      window_size: int,
                      start_value: Optional[int] = None) -> np.ndarray:
    """
    :param array: input array with consequent integer indices
    :param window_size: specifies the range of indices
        which will be included in a separate window
    :param start_value: value from which the window will start
    :return: array of indices marking the borders of the windows
    """
    if start_value is None:
        start_value = array[0]

    diff = np.diff(array)
    diff_indices = np.where(diff)[0] + 1
    slice_ = slice(window_size - 1 - (array[0] - start_value) % window_size,
                   None,
                   window_size)
    return diff_indices[slice_]
Examples of usage:
Checking it with your example data:
# indices: 3 9
idx = np.array([1,1,1, 2,2,3,3,3,3, 4])
you can get the indices separating different windows like this:
get_split_indices(idx, window_size=2, start_value=0)
>>> array([3, 9])
With this function you can also specify different window sizes:
# indices: 7 11 17
idx = np.array([0,1,1,2,2,3,3, 4,5,6,7, 8,9,10,11,11,11, 12,13])
get_split_indices(idx, window_size=4, start_value=0)
>>> array([ 7, 11, 17])
and different starting values:
# indices: 1 7 10 13 18
idx = np.array([0, 1,1,2,2,3,3, 4,5,6, 7,8,9, 10,11,11,11,12, 13])
get_split_indices(idx, window_size=3, start_value=-2)
>>> array([ 1, 7, 10, 13, 18])
Note that by default the first element of the array is taken as the starting value.
You could use the np.hsplit function. For your example of indices 0:3, 3:9, 9 it goes like this:
np.hsplit(arr, [3, 9])
which gives you a list of arrays:
[array([[5, 3, 4],
[4, 5, 8],
[8, 3, 5]]),
array([[5, 6, 4, 2, 4, 5],
[5, 2, 3, 6, 4, 1],
[8, 5, 2, 5, 9, 9]]),
array([[8],
[7],
[4]])]
Then you can compute the mean as follows:
m = [np.mean(a, axis=1) for a in np.hsplit(arr, [3, 9])]
And convert it back to an array:
np.vstack(m).T
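Putting the two answers together (a sketch using the example data from the question and the get_split_indices helper defined above):

import numpy as np

arr = np.array([[5,3,4,5,6,4,2,4,5,8],
                [4,5,8,5,2,3,6,4,1,7],
                [8,3,5,8,5,2,5,9,9,4]])
idx = np.array([1,1,1,2,2,3,3,3,3,4])

splits = get_split_indices(idx, window_size=2, start_value=0)     # array([3, 9])
arrAvg = np.vstack([a.mean(axis=1) for a in np.hsplit(arr, splits)]).T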

How to sum up (W * H) of 3D matrix and store it in 1D matrix with length=depth(third dimension of input matrix)

I want to sum all elements (W * H) of a 3D matrix and store the result in a 1D matrix of length D, the depth (third dimension) of the input.
To make myself clear:
Input: a 1D array laid out as (W * H * D).
Required output: a 1D array of length D.
Let's consider the 2 x 3 x 2 matrix below:
Layer 1            Layer 2
[1, 2, 3           [7, 8, 9
 4, 5, 6]           10, 11, 12]
The output is 1D: [21, 57]
I am new to Python and wrote this:
def test(w, h, c, image_inp):
    output = [image_inp[j * w + k] for i in enumerate(image_inp)
              for j in range(0, h)
              for k in range(0, w)]
    printout(output)
I know this just copies the input array to the output array, and the output array length is not equal to the depth. Can someone please help me get this right?
def test(w, h, c, image_inp):
    # pseudocode, not valid Python; this is the intent:
    output = [hwsum for i in enumerate(image_inp)
              hwsum += wsum for j in range(0, h)
              wsum += image_inp[j*w + k] for k in range(0, w)]
    print "Calling outprint"
    printout(output)
Note: I do not want to use numpy (with numpy it already works) or any math libraries; I am writing this test code in Python to evaluate the same logic in another language.
EDIT:
The input enters the test function as a 1D array with w, h, c as arguments, so it takes the form
[1,2,3,4,5,6,7,8,9,10,11,12]
and has to be interpreted as a 3D matrix using w, h, c.
Thanks
Numpy is very suitable for slicing and manipulating single and multiple dimensional data. It is fast, easy to use and very "pythonic".
Following your example, you can just do:
>>> import numpy
>>> img3d=numpy.array([[[1,2,3],[4,5,6]],[[7,8,9],[10,11,12]]])
>>> img3d.shape
(2, 2, 3)
You can see here that img3d has 2 layers, 2 rows and 3 columns. You can just slice using indexing like this:
>>> img3d[0,:,:]
array([[1, 2, 3],
[4, 5, 6]])
To go from 3D to 1D, just use numpy.flatten():
>>> f=img3d.flatten()
>>> f
array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])
And reversed, use numpy.reshape():
>>> f.reshape((2,2,3))
array([[[ 1,  2,  3],
        [ 4,  5,  6]],
       [[ 7,  8,  9],
        [10, 11, 12]]])
Now just sum using numpy.sum, giving the dimensions you want to sum over (in your case, dimensions 1 and 2, with dimensions being 0-indexed):
>>> numpy.sum(img3d,(1,2))
array([21, 57])
Just to summarize in a one-liner, you can do (variable names from your question; note that the depth c comes first, since each layer occupies w*h consecutive elements):
>>> numpy.sum(numpy.array(image_inp).reshape(c,h,w),(1,2))
From the numpy manual on numpy.sum:
numpy.sum
numpy.sum(a, axis=None, dtype=None, out=None, keepdims=<no value>)
Sum of array elements over a given axis.
Parameters:
a : array_like
    Elements to sum.
axis : None or int or tuple of ints, optional
    Axis or axes along which a sum is performed. The default, axis=None, will sum all of the elements of the input array. If axis is negative it counts from the last to the first axis.
    New in version 1.7.0: If axis is a tuple of ints, a sum is performed on all of the axes specified in the tuple instead of a single axis or all the axes as before.
If your matrix is laid out as your post implies, with your "3D" matrix as an array of flat arrays:
M = [[1, 2, 3,
      4, 5, 6],
     [7, 8, 9,
      10, 11, 12]]
array_of_sums = []
for pseudo_2D_matrix in M:
    array_of_sums.append(sum(pseudo_2D_matrix))
If your 3D matrix, as a real three-dimensional object, is set up as:
M = [
    [[ 1,  2,  3],
     [ 4,  5,  6]],
    [[ 7,  8,  9],
     [10, 11, 12]],
]
you could create a 1D array of sums by doing the following:
array_of_sums = []
for matrix_2D in M:
    s = 0
    for row in matrix_2D:
        s += sum(row)
    array_of_sums.append(s)
It's a bit unclear how your data are formatted, but hopefully you get the idea from these two examples.
EDIT:
In light of the clarification on the input, this is easy to accomplish.
If dimensions w, h, c describe the layout of the flat array [1,2,3,4,5,6,7,8,9,10,11,12], then each depth layer occupies w*h contiguous elements, so you simply need to slice off those regions and sum them:
input_array = [1,2,3,4,5,6,7,8,9,10,11,12]
w, h, c = 2, 3, 2
array_of_sums = []
i = 0
while i < c:
    array_of_sums.append(sum(input_array[i*w*h:(i+1)*w*h]))
    i += 1
or, as a simplified method:
def sum_2D_slices(w, h, c, matrix_3D):
    return [sum(matrix_3D[i*w*h:(i+1)*w*h]) for i in range(c)]
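For the example input from the question, this gives:
>>> sum_2D_slices(2, 3, 2, [1,2,3,4,5,6,7,8,9,10,11,12])
[21, 57]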

How to slice a multidimensional array in python/numpy in a way to select specific row, column and depth?

I'm trying to convert my MATLAB code to python but I'm having some issues. This code is supposed to segment letters from a picture.
Here's the whole code in MATLAB:
he = imread('r.jpg');
imshow(he);
%C = makecform(type) creates the color transformation structure C that defines the color space conversion specified by type.
cform = makecform('srgb2lab');
%To perform the transformation, pass the color transformation structure as an argument to the applycform function.
lab_he = applycform(he,cform);
%convert to double precision
ab = double(lab_he(:,:,2:3));
%size of dimension in 2D array
nrows = size(ab,1);
ncols = size(ab,2);
%B = reshape(A,sz1,...,szN) reshapes A into a sz1-by-...-by-szN array where
%sz1,...,szN indicates the size of each dimension. You can specify a single
% dimension size of [] to have the dimension size automatically calculated,
% such that the number of elements in B matches the number of elements in A.
% For example, if A is a 10-by-10 matrix, then reshape(A,2,2,[]) reshapes
% the 100 elements of A into a 2-by-2-by-25 array.
ab = reshape(ab,nrows*ncols,2);
% repeat the clustering 3 times to avoid local minima
nColors = 3;
[cluster_idx, cluster_center] = kmeans(ab,nColors,'distance','sqEuclidean', ...
'Replicates',3);
pixel_labels = reshape(cluster_idx,nrows,ncols);
imshow(pixel_labels,[]), title('image labeled by cluster index');
segmented_images = cell(1,3);
rgb_label = repmat(pixel_labels,[1 1 3]);
for k = 1:nColors
color = he;
color(rgb_label ~= k) = 0;
segmented_images{k} = color;
end
figure,imshow(segmented_images{1}), title('objects in cluster 1');
figure,imshow(segmented_images{2}), title('objects in cluster 2');
figure,imshow(segmented_images{3}), title('objects in cluster 3');
mean_cluster_value = mean(cluster_center,2);
[tmp, idx] = sort(mean_cluster_value);
blue_cluster_num = idx(1);
L = lab_he(:,:,1);
blue_idx = find(pixel_labels == blue_cluster_num);
L_blue = L(blue_idx);
is_light_blue = im2bw(L_blue,graythresh(L_blue));
% target_labels = repmat(uint8(0),[nrows ncols]);
% target_labels(blue_idx(is_light_blue==false)) = 1;
% target_labels = repmat(target_labels,[1 1 3]);
% blue_target = he;
% blue_target(target_labels ~= 1) = 0;
% figure,imshow(blue_target), title('blue');
Here's what I have in Python so far:
import cv2
import numpy as np
from matplotlib import pyplot as plt
import sys
img = cv2.imread('r.jpg',1)
print "original image: ", img.shape
cv2.imshow('BGR', img)
img1 = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img2 = cv2.cvtColor(img1, cv2.COLOR_RGB2LAB)
cv2.imshow('RGB', img1)
cv2.imshow('LAB', img2) #differs from the LAB color space in MATLAB (need to patch maybe?)
print "LAB converted image: ", img2.shape
print "LAB converted image dimension", img2.ndim #says the image is a 3 dimensional array
img2a = img2.shape[2][1:2]
print "Slicing the LAB converted image", img2a
#we need to convert that to double precision
print img2.dtype
img2a = img2.astype(np.uint64) #convert to double precision
print img2a.dtype
#print img2a
row = img2a.shape[0] #returns number of rows of img2a
column = img2a.shape[1] #returns number of columns of img2a
print "row: ", row #matches the MATLAB version
print "column: ", column #matchees the MATLAB version
rowcol = row * column
k = cv2.waitKey(0)
if k == 27: # wait for ESC key to exit
cv2.destroyAllWindows()
elif k == ord('s'): # wait for 's' key to save and exit
cv2.imwrite('final image',final_image)
cv2.destroyAllWindows()
Now the part I'm currently stuck on: in the MATLAB code I have lab_he(:,:,2:3), which means all the rows and all the columns from depths 2 and 3. However, when I try to replicate that in Python with img2a = img2.shape[2][1:2], it doesn't work or even make sense. Please help.
In Octave/MATLAB
octave:29> x=reshape(1:(2*3*4),3,2,4);
octave:30> x(:,:,2:3)
ans =
ans(:,:,1) =
7 10
8 11
9 12
ans(:,:,2) =
13 16
14 17
15 18
octave:31> size(x(:,:,2:3))
ans =
3 2 2
octave:33> x(:,:,2:3)(2,2,:)
ans(:,:,1) = 11
ans(:,:,2) = 17
In numpy:
In [13]: x=np.arange(1,1+2*3*4).reshape(3,2,4,order='F')
In [14]: x[:,:,1:3]
Out[14]:
array([[[ 7, 13],
[10, 16]],
[[ 8, 14],
[11, 17]],
[[ 9, 15],
[12, 18]]])
In [15]: _.shape
Out[15]: (3, 2, 2)
In [17]: x[:,:,1:3][1,1,:]
Out[17]: array([11, 17])
Or with numpy normal 'C' ordering, and indexing on the 1st dimension
In [18]: y=np.arange(1,1+2*3*4).reshape(4,2,3)
In [19]: y[1:3,:,:]
Out[19]:
array([[[ 7, 8, 9],
[10, 11, 12]],
[[13, 14, 15],
[16, 17, 18]]])
In [20]: y[1:3,:,:][:,1,1]
Out[20]: array([11, 17])
Same indexing ideas, though matching numbers and shapes requires some care, and not only with the 0 vs 1 index base. A 3d array is displayed in a different arrangement: Octave divides it into blocks on the last index (its primary iterator), while numpy iterates on the first index.
In 3d it makes more sense to talk about the first, 2nd and 3rd dimensions rather than row, col, depth. In 4d you run out of names. :)
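Applied to the question's own code (my sketch; img2 is the LAB image from the question's Python script), the MATLAB lines ab = double(lab_he(:,:,2:3)) and ab = reshape(ab,nrows*ncols,2) become:

ab = img2[:, :, 1:3].astype(np.float64)  # MATLAB channels 2:3 are 1:3 here (0-based)
nrows, ncols = ab.shape[:2]
ab = ab.reshape(nrows * ncols, 2)        # one row per pixel, two columns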
I had to reshape an array at a specific depth, and I wrote a little recursive function to do so:
def recursive_array_cutting(tab, depth, i, min, max):
    if i == depth:
        return tab[min:max]
    temp_list = []
    nb_subtab = len(tab)
    for index in range(nb_subtab):
        temp_list.append(recursive_array_cutting(tab[index], depth, i + 1, min, max))
    return np.asanyarray(temp_list)
It gets all the arrays/values between min and max at a specific depth. For instance, if you have a (3, 4) tab = [[0, 1, 2, 3], [0, 1, 2, 3], [0, 1, 2, 3]] and only want the first two values of the deepest arrays, you call tab = recursive_array_cutting(tab, 1, 0, 0, 2) to get the output [[0, 1], [0, 1], [0, 1]].
If you have a more complex array like tab = [[[0, 1, 2, 3], [1, 1, 2, 3], [2, 1, 2, 3]], [[0, 1, 2, 3], [1, 1, 2, 3], [2, 1, 2, 3]], [[0, 1, 2, 3], [1, 1, 2, 3], [2, 1, 2, 3]]] (shape (3, 3, 4)) and want a (3, 2, 4) array, you can call tab = recursive_array_cutting(tab, 1, 0, 0, 2) to keep only the first two entries along depth 1.
A function like this surely exists in numpy, but I did not find it.
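It does exist (my addition): basic slicing, or numpy.take with an axis argument, selects a range along a chosen dimension, which is exactly what the recursive function reimplements:

import numpy as np

tab = np.zeros((3, 3, 4))
out = np.take(tab, np.arange(0, 2), axis=1)      # shape (3, 2, 4)
# equivalently, with a dynamically built tuple of slices:
out2 = tab[(slice(None),) * 1 + (slice(0, 2),)]  # same result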
