Operations on 'N' dimensional numpy arrays - python

I am attempting to generalize some Python code to operate on arrays of arbitrary dimension. The operations are applied to each vector in the array: for a 1D array there is simply one operation, while for a 2D array it is applied both row- and column-wise (the operations are linear, so the order does not matter). For example, a 1D array (a) is simple:
b = operation(a)
where 'operation' is expecting a 1D array. For a 2D array, the operation might proceed as
for ii in range(0, a.shape[0]):
    b[ii, :] = operation(a[ii, :])
for jj in range(0, b.shape[1]):
    c[:, jj] = operation(b[:, jj])
I would like to make this general where I do not need to know the dimension of the array beforehand, and not have a large set of if/elif statements for each possible dimension.
Solutions that are general for 1 or 2 dimensions are ok, though a completely general solution would be preferred. In reality, I do not imagine needing this for any dimension higher than 2, but if I can see a general example I will learn something!
Extra information:
I have some MATLAB code that uses cell arrays to do something similar, but I do not fully understand how it works. In this example, each vector is rearranged (basically the same function as fftshift in numpy.fft). Not sure if this helps, but it operates on an array of arbitrary dimension.
function aout = foldfft(ain)
nd = ndims(ain);
for k = 1:nd
    nx = size(ain,k);
    kx = floor(nx/2);
    idx{k} = [kx:nx 1:kx-1];
end
aout = ain(idx{:});

In Octave, your MATLAB code does:
octave:19> size(ain)
ans =
2 3 4
octave:20> idx
idx =
{
  [1,1] =
     1   2
  [1,2] =
     1   2   3
  [1,3] =
     2   3   4   1
}
and then it uses the idx cell array to index ain. With these dimensions it 'rolls' the size 4 dimension.
For dimensions of size 5 and 6 the index lists would be:
2 3 4 5 1
3 4 5 6 1 2
The equivalent in numpy is:
In [161]: ain=np.arange(2*3*4).reshape(2,3,4)
In [162]: idx=np.ix_([0,1],[0,1,2],[1,2,3,0])
In [163]: idx
Out[163]:
(array([[[0]],

        [[1]]]),
 array([[[0],
         [1],
         [2]]]),
 array([[[1, 2, 3, 0]]]))
In [164]: ain[idx]
Out[164]:
array([[[ 1,  2,  3,  0],
        [ 5,  6,  7,  4],
        [ 9, 10, 11,  8]],

       [[13, 14, 15, 12],
        [17, 18, 19, 16],
        [21, 22, 23, 20]]])
Besides the 0 based indexing, I used np.ix_ to reshape the indexes. MATLAB and numpy use different syntax to index blocks of values.
The next step is to construct [0,1],[0,1,2],[1,2,3,0] with code, a straightforward translation.
I can use np.r_ as a short cut for turning 2 slices into an index array:
In [201]: idx = []
In [202]: for nx in ain.shape:
   .....:     kx = int(np.floor(nx/2.))
   .....:     kx = kx - 1
   .....:     idx.append(np.r_[kx:nx, 0:kx])
   .....:
In [203]: idx
Out[203]: [array([0, 1]), array([0, 1, 2]), array([1, 2, 3, 0])]
and pass this through np.ix_ to make the appropriate index tuple:
In [204]: ain[np.ix_(*idx)]
Out[204]:
array([[[ 1,  2,  3,  0],
        [ 5,  6,  7,  4],
        [ 9, 10, 11,  8]],

       [[13, 14, 15, 12],
        [17, 18, 19, 16],
        [21, 22, 23, 20]]])
In this case, where 2 dimensions don't roll anything, slice(None) could replace those:
In [210]: idx=(slice(None),slice(None),[1,2,3,0])
In [211]: ain[idx]
======================
np.roll does:
indexes = concatenate((arange(n - shift, n), arange(n - shift)))
res = a.take(indexes, axis)
np.apply_along_axis is another function that constructs an index array (and turns it into a tuple for indexing).
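Putting the transcript together, here is a minimal sketch that wraps the np.r_/np.ix_ construction above into a reusable function (foldfft_ix is a hypothetical name, not from the original MATLAB):
import numpy as np

def foldfft_ix(ain):
    # Build the 0-based per-axis index lists (mirroring the MATLAB loop)
    # and apply them all at once with np.ix_.
    idx = []
    for nx in ain.shape:
        kx = int(np.floor(nx / 2.)) - 1
        idx.append(np.r_[kx:nx, 0:kx])
    return ain[np.ix_(*idx)]

ain = np.arange(2*3*4).reshape(2, 3, 4)
print(foldfft_ix(ain))  # same result as ain[np.ix_(*idx)] above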

If you are looking for a programmatic way to index the k-th dimension of an n-dimensional array, then numpy.take might help you.
An implementation of foldfft is given below as an example:
In [1]:
import numpy as np

def foldfft(ain):
    result = ain
    nd = len(ain.shape)
    for k in range(nd):
        nx = ain.shape[k]
        kx = (nx + 1)//2
        shifted_index = list(range(kx, nx)) + list(range(kx))
        result = np.take(result, shifted_index, k)
    return result

a = np.indices([3,3])
print("Shape of a = ", a.shape)
print("\nStarting array:\n\n", a)
print("\nFolded array:\n\n", foldfft(a))
Out[1]:
Shape of a = (2, 3, 3)
Starting array:
[[[0 0 0]
  [1 1 1]
  [2 2 2]]

 [[0 1 2]
  [0 1 2]
  [0 1 2]]]

Folded array:

[[[2 0 1]
  [2 0 1]
  [2 0 1]]

 [[2 2 2]
  [0 0 0]
  [1 1 1]]]

You could use numpy.ndarray.flat, which allows you to linearly iterate over an n-dimensional numpy array. Your code should then look something like this:
b = np.array(x)  # note: np.array copies; np.asarray would just alias x
for i in range(len(x.flat)):
    b.flat[i] = operation(x.flat[i])

The folks above provided multiple appropriate solutions. For completeness, here is my final solution. In this toy example for the case of 3 dimensions, the function 'ops' replaces the first and last element of a vector with 1.
import numpy as np

def ops(s):
    s[0] = 1
    s[-1] = 1
    return s

a = np.random.rand(4, 4, 3)
print('------')
print('Array a')
print(a)
print('------')
for ii in np.arange(a.ndim):
    a = np.apply_along_axis(ops, ii, a)
    print('------')
    print(' Axis', str(ii))
    print(a)
    print('------')
print(' ')
The resulting 3D array has a 1 in every element on the 'border' with the numbers in the middle of the array unchanged. This is of course a toy example; however ops could be any arbitrary function that operates on a 1D vector.
Flattening the vector will also work; I chose not to pursue that simply because the book-keeping is more difficult and apply_along_axis is the simplest approach.
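As a final sanity check tying this back to the fftshift example from the question, a small sketch (assuming the goal is the half-length shift along every axis that np.fft.fftshift performs):
import numpy as np

a = np.random.rand(4, 6, 5)
b = a
for ax in range(a.ndim):
    b = np.apply_along_axis(np.fft.fftshift, ax, b)
# shifting axis-by-axis matches the all-axes shift
print(np.allclose(b, np.fft.fftshift(a)))  # True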
apply_along_axis reference page


How does np.einsum work?
Given arrays A and B, their matrix multiplication followed by transpose is computed using (A @ B).T, or equivalently, using:
np.einsum("ij, jk -> ki", A, B)
(Note: this answer is based on a short blog post about einsum I wrote a while ago.)
What does einsum do?
Imagine that we have two multi-dimensional arrays, A and B. Now let's suppose we want to...
multiply A with B in a particular way to create new array of products; and then maybe
sum this new array along particular axes; and then maybe
transpose the axes of the new array in a particular order.
There's a good chance that einsum will help us do this faster and more memory-efficiently than combinations of the NumPy functions like multiply, sum and transpose will allow.
How does einsum work?
Here's a simple (but not completely trivial) example. Take the following two arrays:
A = np.array([0, 1, 2])

B = np.array([[ 0,  1,  2,  3],
              [ 4,  5,  6,  7],
              [ 8,  9, 10, 11]])
We will multiply A and B element-wise and then sum along the rows of the new array. In "normal" NumPy we'd write:
>>> (A[:, np.newaxis] * B).sum(axis=1)
array([ 0, 22, 76])
So here, the indexing operation on A lines up the first axes of the two arrays so that the multiplication can be broadcast. The rows of the array of products are then summed to return the answer.
Now if we wanted to use einsum instead, we could write:
>>> np.einsum('i,ij->i', A, B)
array([ 0, 22, 76])
The signature string 'i,ij->i' is the key here and needs a little bit of explaining. You can think of it in two halves. On the left-hand side (left of the ->) we've labelled the two input arrays. To the right of ->, we've labelled the array we want to end up with.
Here is what happens next:
A has one axis; we've labelled it i. And B has two axes; we've labelled axis 0 as i and axis 1 as j.
By repeating the label i in both input arrays, we are telling einsum that these two axes should be multiplied together. In other words, we're multiplying array A with each column of array B, just like A[:, np.newaxis] * B does.
Notice that j does not appear as a label in our desired output; we've just used i (we want to end up with a 1D array). By omitting the label, we're telling einsum to sum along this axis. In other words, we're summing the rows of the products, just like .sum(axis=1) does.
That's basically all you need to know to use einsum. It helps to play about a little; if we leave both labels in the output, 'i,ij->ij', we get back a 2D array of products (same as A[:, np.newaxis] * B). If we say no output labels, 'i,ij->', we get back a single number (same as doing (A[:, np.newaxis] * B).sum()).
The great thing about einsum however, is that it does not build a temporary array of products first; it just sums the products as it goes. This can lead to big savings in memory use.
A slightly bigger example
To explain the dot product, here are two new arrays:
A = np.array([[1, 1, 1],
              [2, 2, 2],
              [5, 5, 5]])

B = np.array([[0, 1, 0],
              [1, 1, 0],
              [1, 1, 1]])
We will compute the dot product using np.einsum('ij,jk->ik', A, B).
You can see that label j is repeated - this means we're multiplying the rows of A with the columns of B. Furthermore, the label j is not included in the output - we're summing these products. Labels i and k are kept for the output, so we get back a 2D array.
It might be even clearer to compare this result with the array where the label j is not summed: writing np.einsum('ij,jk->ijk', A, B) (i.e. keeping label j) gives a 3D array of products.
Summing over axis j then gives the expected dot product.
Some exercises
To get more of a feel for einsum, it can be useful to implement familiar NumPy array operations using the subscript notation. Anything that involves combinations of multiplying and summing axes can be written using einsum.
Let A and B be two 1D arrays with the same length. For example, A = np.arange(10) and B = np.arange(5, 15).
The sum of A can be written:
np.einsum('i->', A)
Element-wise multiplication, A * B, can be written:
np.einsum('i,i->i', A, B)
The inner product or dot product, np.inner(A, B) or np.dot(A, B), can be written:
np.einsum('i,i->', A, B) # or just use 'i,i'
The outer product, np.outer(A, B), can be written:
np.einsum('i,j->ij', A, B)
For 2D arrays, C and D, provided that the axes are compatible lengths (both the same length or one of them has length 1), here are a few examples:
The trace of C (sum of main diagonal), np.trace(C), can be written:
np.einsum('ii', C)
Element-wise multiplication of C and the transpose of D, C * D.T, can be written:
np.einsum('ij,ji->ij', C, D)
Multiplying each element of C by the array D (to make a 4D array), C[:, :, None, None] * D, can be written:
np.einsum('ij,kl->ijkl', C, D)
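All of these identities are easy to check against the plain NumPy equivalents with np.allclose; a quick sketch (the array contents are arbitrary):
import numpy as np

A = np.arange(10)
B = np.arange(5, 15)
C = np.random.rand(3, 3)
D = np.random.rand(3, 3)

assert np.allclose(np.einsum('i->', A), A.sum())
assert np.allclose(np.einsum('i,i->i', A, B), A * B)
assert np.allclose(np.einsum('i,i->', A, B), np.inner(A, B))
assert np.allclose(np.einsum('i,j->ij', A, B), np.outer(A, B))
assert np.allclose(np.einsum('ii', C), np.trace(C))
assert np.allclose(np.einsum('ij,ji->ij', C, D), C * D.T)
assert np.allclose(np.einsum('ij,kl->ijkl', C, D), C[:, :, None, None] * D)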
Grasping the idea of numpy.einsum() is very easy if you understand it intuitively. As an example, let's start with a simple description involving matrix multiplication.
To use numpy.einsum(), all you have to do is to pass the so-called subscripts string as an argument, followed by your input arrays.
Let's say you have two 2D arrays, A and B, and you want to do matrix multiplication. So, you do:
np.einsum("ij, jk -> ik", A, B)
Here the subscript string ij corresponds to array A while the subscript string jk corresponds to array B. Also, the most important thing to note here is that the number of characters in each subscript string must match the dimensions of the array (i.e., two chars for 2D arrays, three chars for 3D arrays, and so on). And if you repeat chars between subscript strings (j in our case), that means you want the products to be summed along those dimensions. Thus, they will be sum-reduced (i.e., those dimensions will be gone).
The subscript string after the -> symbol represents the dimensions of our resultant array.
If you leave it empty, then everything will be summed and a scalar value is returned as the result. Else the resultant array will have dimensions according to the subscript string. In our example, it'll be ik. This is intuitive because we know that for matrix multiplication to work, the number of columns in array A has to match the number of rows in array B, which is what is happening here (i.e., we encode this knowledge by repeating the char j in the subscript string).
Here are some more examples illustrating the use/power of np.einsum() in implementing some common tensor or nd-array operations, succinctly.
Inputs
# a vector
In [197]: vec
Out[197]: array([0, 1, 2, 3])
# an array
In [198]: A
Out[198]:
array([[11, 12, 13, 14],
[21, 22, 23, 24],
[31, 32, 33, 34],
[41, 42, 43, 44]])
# another array
In [199]: B
Out[199]:
array([[1, 1, 1, 1],
[2, 2, 2, 2],
[3, 3, 3, 3],
[4, 4, 4, 4]])
1) Matrix multiplication (similar to np.matmul(arr1, arr2))
In [200]: np.einsum("ij, jk -> ik", A, B)
Out[200]:
array([[130, 130, 130, 130],
[230, 230, 230, 230],
[330, 330, 330, 330],
[430, 430, 430, 430]])
2) Extract elements along the main-diagonal (similar to np.diag(arr))
In [202]: np.einsum("ii -> i", A)
Out[202]: array([11, 22, 33, 44])
3) Hadamard product (i.e. element-wise product of two arrays) (similar to arr1 * arr2)
In [203]: np.einsum("ij, ij -> ij", A, B)
Out[203]:
array([[ 11, 12, 13, 14],
[ 42, 44, 46, 48],
[ 93, 96, 99, 102],
[164, 168, 172, 176]])
4) Element-wise squaring (similar to np.square(arr) or arr ** 2)
In [210]: np.einsum("ij, ij -> ij", B, B)
Out[210]:
array([[ 1, 1, 1, 1],
[ 4, 4, 4, 4],
[ 9, 9, 9, 9],
[16, 16, 16, 16]])
5) Trace (i.e. sum of main-diagonal elements) (similar to np.trace(arr))
In [217]: np.einsum("ii -> ", A)
Out[217]: 110
6) Matrix transpose (similar to np.transpose(arr))
In [221]: np.einsum("ij -> ji", A)
Out[221]:
array([[11, 21, 31, 41],
[12, 22, 32, 42],
[13, 23, 33, 43],
[14, 24, 34, 44]])
7) Outer Product (of vectors) (similar to np.outer(vec1, vec2))
In [255]: np.einsum("i, j -> ij", vec, vec)
Out[255]:
array([[0, 0, 0, 0],
[0, 1, 2, 3],
[0, 2, 4, 6],
[0, 3, 6, 9]])
8) Inner Product (of vectors) (similar to np.inner(vec1, vec2))
In [256]: np.einsum("i, i -> ", vec, vec)
Out[256]: 14
9) Sum along axis 0 (similar to np.sum(arr, axis=0))
In [260]: np.einsum("ij -> j", B)
Out[260]: array([10, 10, 10, 10])
10) Sum along axis 1 (similar to np.sum(arr, axis=1))
In [261]: np.einsum("ij -> i", B)
Out[261]: array([ 4, 8, 12, 16])
11) Batch Matrix Multiplication
In [287]: BM = np.stack((A, B), axis=0)
In [288]: BM
Out[288]:
array([[[11, 12, 13, 14],
        [21, 22, 23, 24],
        [31, 32, 33, 34],
        [41, 42, 43, 44]],

       [[ 1,  1,  1,  1],
        [ 2,  2,  2,  2],
        [ 3,  3,  3,  3],
        [ 4,  4,  4,  4]]])
In [289]: BM.shape
Out[289]: (2, 4, 4)
# batch matrix multiply using einsum
In [292]: BMM = np.einsum("bij, bjk -> bik", BM, BM)
In [293]: BMM
Out[293]:
array([[[1350, 1400, 1450, 1500],
        [2390, 2480, 2570, 2660],
        [3430, 3560, 3690, 3820],
        [4470, 4640, 4810, 4980]],

       [[  10,   10,   10,   10],
        [  20,   20,   20,   20],
        [  30,   30,   30,   30],
        [  40,   40,   40,   40]]])
In [294]: BMM.shape
Out[294]: (2, 4, 4)
12) Sum along axis 2 (similar to np.sum(arr, axis=2))
In [330]: np.einsum("ijk -> ij", BM)
Out[330]:
array([[ 50, 90, 130, 170],
[ 4, 8, 12, 16]])
13) Sum all the elements in array (similar to np.sum(arr))
In [335]: np.einsum("ijk -> ", BM)
Out[335]: 480
14) Sum over multiple axes (i.e. marginalization)
(similar to np.sum(arr, axis=(axis0, axis1, axis2, axis3, axis4, axis6, axis7)))
# 8D array
In [354]: R = np.random.standard_normal((3,5,4,6,8,2,7,9))
# marginalize out axis 5 (i.e. "n" here)
In [363]: esum = np.einsum("ijklmnop -> n", R)
# marginalize out axis 5 (i.e. sum over rest of the axes)
In [364]: nsum = np.sum(R, axis=(0,1,2,3,4,6,7))
In [365]: np.allclose(esum, nsum)
Out[365]: True
15) Double Dot Products (similar to np.sum(hadamard-product) cf. 3)
In [772]: A
Out[772]:
array([[1, 2, 3],
[4, 2, 2],
[2, 3, 4]])
In [773]: B
Out[773]:
array([[1, 4, 7],
[2, 5, 8],
[3, 6, 9]])
In [774]: np.einsum("ij, ij -> ", A, B)
Out[774]: 124
16) 2D and 3D array multiplication
Such a multiplication could be very useful when solving a linear system of equations (Ax = b) where you want to verify the result.
# inputs
In [115]: A = np.random.rand(3,3)
In [116]: b = np.random.rand(3, 4, 5)
# solve for x
In [117]: x = np.linalg.solve(A, b.reshape(b.shape[0], -1)).reshape(b.shape)
# 2D and 3D array multiplication :)
In [118]: Ax = np.einsum('ij, jkl', A, x)
# indeed the same!
In [119]: np.allclose(Ax, b)
Out[119]: True
On the contrary, if one has to use np.matmul() for this verification, we have to do a couple of reshape operations to achieve the same result, like:
# reshape 3D array `x` to 2D, perform matmul
# then reshape the resultant array to 3D
In [123]: Ax_matmul = np.matmul(A, x.reshape(x.shape[0], -1)).reshape(x.shape)
# indeed correct!
In [124]: np.allclose(Ax, Ax_matmul)
Out[124]: True
Bonus: Read more math here : Einstein-Summation and definitely here: Tensor-Notation
When reading einsum equations, I've found it the most helpful to just be able to
mentally boil them down to their imperative versions.
Let's start with the following (imposing) statement:
C = np.einsum('bhwi,bhwj->bij', A, B)
Working through the punctuation first we see that we have two 4-letter comma-separated blobs - bhwi and bhwj, before the arrow,
and a single 3-letter blob bij after it. Therefore, the equation produces a rank-3 tensor result from two rank-4 tensor inputs.
Now, let each letter in each blob be the name of a range variable. The position at which the letter appears in the blob
is the index of the axis that it ranges over in that tensor.
The imperative summation that produces each element of C, therefore, has to start with three nested for loops, one for each index of C.
for b in range(...):
    for i in range(...):
        for j in range(...):
            # the variables b, i and j index C in the order of their appearance in the equation
            C[b, i, j] = ...
So, essentially, you have a for loop for every output index of C. We'll leave the ranges undetermined for now.
Next we look at the left-hand side - are there any range variables there that don't appear on the right-hand side? In our case - yes, h and w.
Add an inner nested for loop for every such variable:
for b in range(...):
    for i in range(...):
        for j in range(...):
            C[b, i, j] = 0
            for h in range(...):
                for w in range(...):
                    ...
Inside the innermost loop we now have all indices defined, so we can write the actual summation and
the translation is complete:
# three nested for-loops that index the elements of C
for b in range(...):
    for i in range(...):
        for j in range(...):
            # prepare to sum
            C[b, i, j] = 0
            # two nested for-loops for the two indexes that don't appear on the right-hand side
            for h in range(...):
                for w in range(...):
                    # Sum! Compare the statement below with the original einsum formula
                    # 'bhwi,bhwj->bij'
                    C[b, i, j] += A[b, h, w, i] * B[b, h, w, j]
If you've been able to follow the code thus far, then congratulations! This is all you need to be able to read einsum equations. Notice in particular how the original einsum formula maps to the final summation statement in the snippet above. The for-loops and range bounds are just fluff and that final statement is all you really need to understand what's going on.
For the sake of completeness, let's see how to determine the ranges for each range variable. Well, the range of each variable is simply the length of the dimension(s) which it indexes.
Obviously, if a variable indexes more than one dimension in one or more tensors, then the lengths of each of those dimensions have to be equal.
Here's the code above with the complete ranges:
# C's shape is determined by the shapes of the inputs
# b indexes both A and B, so its range can come from either A.shape or B.shape
# i indexes only A, so its range can only come from A.shape; the same is true for j and B
assert A.shape[0] == B.shape[0]
assert A.shape[1] == B.shape[1]
assert A.shape[2] == B.shape[2]
C = np.zeros((A.shape[0], A.shape[3], B.shape[3]))
for b in range(A.shape[0]):  # b indexes both A and B, so B.shape[0] would work too
    for i in range(A.shape[3]):
        for j in range(B.shape[3]):
            # h and w can come from either A or B
            for h in range(A.shape[1]):
                for w in range(A.shape[2]):
                    C[b, i, j] += A[b, h, w, i] * B[b, h, w, j]
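To convince yourself the translation is faithful, the loops can be checked against einsum itself; a quick sketch with small arbitrary shapes:
import numpy as np

A = np.random.rand(2, 3, 4, 5)   # axes: b, h, w, i
B = np.random.rand(2, 3, 4, 6)   # axes: b, h, w, j
C = np.zeros((A.shape[0], A.shape[3], B.shape[3]))
for b in range(A.shape[0]):
    for i in range(A.shape[3]):
        for j in range(B.shape[3]):
            for h in range(A.shape[1]):
                for w in range(A.shape[2]):
                    C[b, i, j] += A[b, h, w, i] * B[b, h, w, j]
print(np.allclose(C, np.einsum('bhwi,bhwj->bij', A, B)))  # True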
Another view on np.einsum
Most answers here explain by example, I thought I'd give an additional point of view.
You can see einsum as a generalized matrix summation operator. The string given contains the subscripts which are labels representing axes. I like to think of it as your operation definition. The subscripts provide two apparent constraints:
the number of axes for each input array,
axis size equality between inputs.
Let's take the initial example: np.einsum('ij,jk->ki', A, B). Here constraint 1 translates to A.ndim == 2 and B.ndim == 2, and constraint 2 to A.shape[1] == B.shape[0].
As you will see later down, there are other constraints. For instance:
labels in the output subscript must not appear more than once.
labels in the output subscript must appear in the input subscripts.
When looking at ij,jk->ki, you can think of it as:
which components from the input arrays will contribute to component [k, i] of the output array.
The subscripts contain the exact definition of the operation for each component of the output array.
We will stick with operation ij,jk->ki, and the following definitions of A and B:
>>> A = np.array([[1,4,1,7], [8,1,2,2], [7,4,3,4]])
>>> A.shape
(3, 4)
>>> B = np.array([[2,5], [0,1], [5,7], [9,2]])
>>> B.shape
(4, 2)
The output, Z, will have a shape of (B.shape[1], A.shape[0]) and could naively be constructed in the following way. Starting with a blank array for Z:
Z = np.zeros((B.shape[1], A.shape[0]))
for i in range(A.shape[0]):
    for j in range(A.shape[1]):
        for k in range(B.shape[0]):
            Z[k, i] += A[i, j]*B[j, k]  # ki <- ij*jk
np.einsum is about accumulating contributions in the output array. Each (A[i,j], B[j,k]) pair is seen contributing to each Z[k, i] component.
You might have noticed, it looks extremely similar to how you would go about computing general matrix multiplications...
Minimal implementation
Here is a minimal implementation of np.einsum in Python. This should help understand what is really going on under the hood.
As we go along I will keep referring to the previous example. Defining inputs as [A, B].
np.einsum can actually take more than two inputs. In the following, we will focus on the general case: n inputs and n input subscripts. The main goal is to find the domain of iteration, i.e. the cartesian product of all our ranges.
We can't rely on manually writing for loops, simply because we don't know how many there will be. The main idea is this: we need to find all unique labels (I will use key and keys to refer to them), find the corresponding array shape, then create ranges for each one, and compute the product of the ranges using itertools.product to get the domain of study.
index | keys | constraints              | sizes | ranges
------+------+--------------------------+-------+------------
1     | 'i'  | A.shape[0]               | 3     | range(0, 3)
2     | 'j'  | A.shape[1] == B.shape[0] | 4     | range(0, 4)
0     | 'k'  | B.shape[1]               | 2     | range(0, 2)
The domain of study is the cartesian product: range(0, 2) x range(0, 3) x range(0, 4).
Subscripts processing:
>>> expr = 'ij,jk->ki'
>>> qry_expr, res_expr = expr.split('->')
>>> inputs_expr = qry_expr.split(',')
>>> inputs_expr, res_expr
(['ij', 'jk'], 'ki')
Find the unique keys (labels) in the input subscripts:
>>> keys = set([(key, size) for keys, input in zip(inputs_expr, inputs)
...                         for key, size in list(zip(keys, input.shape))])
{('i', 3), ('j', 4), ('k', 2)}
We should be checking for constraints (as well as in the output subscript)! Using set is a bad idea but it will work for the purpose of this example.
Get the associated sizes (used to initialize the output array) and construct the ranges (used to create our domain of iteration):
>>> sizes = dict(keys)
{'i': 3, 'j': 4, 'k': 2}
>>> ranges = [range(size) for _, size in keys]
[range(0, 2), range(0, 3), range(0, 4)]
We need a list containing the keys (labels):
>>> to_key = list(sizes.keys())
['k', 'i', 'j']
Compute the cartesian product of the ranges:
>>> from itertools import product
>>> domain = product(*ranges)
Note: itertools.product returns an iterator, which gets consumed over time.
Initialize the output tensor as:
>>> res = np.zeros([sizes[key] for key in res_expr])
We will be looping over domain:
>>> for indices in domain:
...     pass
For each iteration, indices will contain the values on each axis. In our example, that would provide i, j, and k as a tuple: (k, i, j). For each input (A and B) we need to determine which component to fetch. That's A[i, j] and B[j, k], yes! However, we don't have variables i, j, and k, literally speaking.
We can zip indices with to_key to create a mapping between each key (label) and its current value:
>>> vals = dict(zip(to_key, indices))
To get the coordinates for the output array, we use vals and loop over the keys: [vals[key] for key in res_expr]. However, to use these to index the output array, we need to wrap it with tuple and zip to separate the indices along each axis:
>>> res_ind = tuple(zip([vals[key] for key in res_expr]))
Same for the input indices (although there can be several):
>>> inputs_ind = [tuple(zip([vals[key] for key in expr])) for expr in inputs_expr]
We will use functools.reduce to compute the product of all contributing components:
>>> from functools import reduce
>>> def reduce_mult(L):
...     return reduce(lambda x, y: x*y, L)
Overall the loop over the domain looks like:
>>> for indices in domain:
...     vals = {k: v for v, k in zip(indices, to_key)}
...     res_ind = tuple(zip([vals[key] for key in res_expr]))
...     inputs_ind = [tuple(zip([vals[key] for key in expr]))
...                   for expr in inputs_expr]
...     res[res_ind] += reduce_mult([M[i] for M, i in zip(inputs, inputs_ind)])
>>> res
array([[70., 44., 65.],
[30., 59., 68.]])
That's pretty close to what np.einsum('ij,jk->ki', A, B) returns!
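For reference, here is the walk-through consolidated into one function; a sketch (naive_einsum is a hypothetical name) that assumes an explicit '->', single-letter labels, and no constraint checking:
import numpy as np
from functools import reduce
from itertools import product

def naive_einsum(expr, *inputs):
    qry_expr, res_expr = expr.split('->')
    inputs_expr = qry_expr.split(',')
    # collect each label's size from the inputs (no constraint checking)
    sizes = {}
    for labels, arr in zip(inputs_expr, inputs):
        for key, size in zip(labels, arr.shape):
            sizes[key] = size
    to_key = list(sizes)
    ranges = [range(sizes[k]) for k in to_key]
    res = np.zeros([sizes[k] for k in res_expr])
    # accumulate one product per point of the iteration domain
    for indices in product(*ranges):
        vals = dict(zip(to_key, indices))
        res[tuple(vals[k] for k in res_expr)] += reduce(
            lambda x, y: x * y,
            [M[tuple(vals[k] for k in labels)]
             for M, labels in zip(inputs, inputs_expr)])
    return res

A = np.array([[1, 4, 1, 7], [8, 1, 2, 2], [7, 4, 3, 4]])
B = np.array([[2, 5], [0, 1], [5, 7], [9, 2]])
print(np.allclose(naive_einsum('ij,jk->ki', A, B),
                  np.einsum('ij,jk->ki', A, B)))  # True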
I found NumPy: The tricks of the trade (Part II) instructive
We use -> to indicate the order of the output array. So think of 'ij, i->j' as having left hand side (LHS) and right hand side (RHS). Any repetition of labels on the LHS computes the product element wise and then sums over. By changing the label on the RHS (output) side, we can define the axis in which we want to proceed with respect to the input array, i.e. summation along axis 0, 1 and so on.
import numpy as np
>>> a
array([[1, 1, 1],
[2, 2, 2],
[3, 3, 3]])
>>> b
array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
>>> d = np.einsum('ij, jk->ki', a, b)
Notice there are three axes, i, j, k, and that j is repeated (on the left-hand-side). i,j represent rows and columns for a. j,k for b.
In order to calculate the product and align the j axis we need to add an axis to a (b is then broadcast along the new first axis):
a[i, j, k]
b[j, k]
>>> c = a[:,:,np.newaxis] * b
>>> c
array([[[ 0,  1,  2],
        [ 3,  4,  5],
        [ 6,  7,  8]],

       [[ 0,  2,  4],
        [ 6,  8, 10],
        [12, 14, 16]],

       [[ 0,  3,  6],
        [ 9, 12, 15],
        [18, 21, 24]]])
j is absent from the right-hand side, so we sum over j, which is the second axis of the 3x3x3 array
>>> c = c.sum(1)
>>> c
array([[ 9, 12, 15],
[18, 24, 30],
[27, 36, 45]])
Finally, the indices are (alphabetically) reversed on the right-hand-side so we transpose.
>>> c.T
array([[ 9, 18, 27],
[12, 24, 36],
[15, 30, 45]])
>>> np.einsum('ij, jk->ki', a, b)
array([[ 9, 18, 27],
[12, 24, 36],
[15, 30, 45]])
Let's make 2 arrays, with different but compatible dimensions, to highlight their interplay
In [43]: A=np.arange(6).reshape(2,3)
Out[43]:
array([[0, 1, 2],
[3, 4, 5]])
In [44]: B=np.arange(12).reshape(3,4)
Out[44]:
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
Your calculation takes a 'dot' (sum of products) of a (2,3) with a (3,4) to produce a (4,2) array. i is the 1st dim of A, the last of C; k the last of B, 1st of C. j is 'consumed' by the summation.
In [45]: C=np.einsum('ij,jk->ki',A,B)
Out[45]:
array([[20, 56],
[23, 68],
[26, 80],
[29, 92]])
This is the same as np.dot(A,B).T - it's the final output that's transposed.
To see more of what happens to j, change the C subscripts to ijk:
In [46]: np.einsum('ij,jk->ijk',A,B)
Out[46]:
array([[[ 0,  0,  0,  0],
        [ 4,  5,  6,  7],
        [16, 18, 20, 22]],

       [[ 0,  3,  6,  9],
        [16, 20, 24, 28],
        [40, 45, 50, 55]]])
This can also be produced with:
A[:,:,None]*B[None,:,:]
That is, add a k dimension to the end of A, and an i to the front of B, resulting in a (2,3,4) array.
0 + 4 + 16 = 20, 9 + 28 + 55 = 92, etc; Sum on j and transpose to get the earlier result:
np.sum(A[:,:,None] * B[None,:,:], axis=1).T
# C[k,i] = sum(j) A[i,j (,k) ] * B[(i,) j,k]
Once you get familiar with the dummy index (the common or repeating index) and the summation along the dummy index in the Einstein Summation (einsum), the output -> shaping is easy. Hence focus on:
Dummy index, the common index j in np.einsum("ij,jk->ki", a, b)
Summation along the dummy index j
Dummy index
For einsum("...", a, b), element-wise multiplication always happens between matrices a and b regardless of whether there are common indices or not. We can have einsum('xy,wz', a, b) which has no common index in the subscripts 'xy,wz'.
If there is a common index, as j in "ij,jk->ki", then it is called a dummy index in the Einstein Summation.
Einstein Summation
An index that is summed over is a summation index, in this case "i". It is also called a dummy index since any symbol can replace "i" without changing the meaning of the expression provided that it does not collide with index symbols in the same term.
Summation along the dummy index
For np.einsum("ij,j", a, b), j is the dummy index. The element-wise multiplication a[i][j] * b[j] is summed up along the j axis as Σ ( a[i][j] * b[j] ).
It is a dot product np.inner(a[i], b) for each i. Here I am being specific with np.inner() and avoiding np.dot, as the latter is not strictly a mathematical dot product operation.
Einstein Summation Convention: an Introduction
The dummy index can appear anywhere as long as the rules (see the linked video for details) are met.
For the dummy index i in np.einsum("ik,il", a, b), it is a row index of the matrices a and b, hence a column from a and a column from b are extracted to generate the dot products.
Output form
Because the summation occurs along the dummy index, the dummy index disappears in the result matrix; hence i from "ik,il" is dropped, and the result has the shape (k,l). We can tell np.einsum("... -> <shape>") to specify the output form via the output subscript labels after the -> identifier.
See the explicit mode in numpy.einsum for details.
In explicit mode the output can be directly controlled by specifying
output subscript labels. This requires the identifier ‘->’ as well as
the list of output subscript labels. This feature increases the
flexibility of the function since summing can be disabled or forced
when required. The call np.einsum('i->', a) is like np.sum(a, axis=-1), and np.einsum('ii->i', a) is like np.diag(a). The difference
is that einsum does not allow broadcasting by default. Additionally
np.einsum('ij,jh->ih', a, b) directly specifies the order of the
output subscript labels and therefore returns matrix multiplication,
unlike the example above in implicit mode.
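A quick sketch of the implicit/explicit distinction described in that quote (the array is arbitrary):
import numpy as np

a = np.arange(9).reshape(3, 3)
# Implicit mode: no '->'; output labels are the free indices in sorted order.
print(np.einsum('ij', a))     # no repeated or dropped labels: a unchanged
print(np.einsum('ji', a))     # sorted output labels are 'ij', so this is a.T
# Explicit mode: '->' pins the output exactly.
print(np.einsum('ij->', a))   # scalar sum of all elements
print(np.einsum('ii->i', a))  # main diagonal, like np.diag(a)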
Without a dummy index
An example for having no dummy index in the einsum.
A term (subscript indices, e.g. "ij") selects an element in each array.
Each left-hand side element is applied to the element on the right-hand side for element-wise multiplication (hence multiplication always happens).
a has shape (2,3), each element of which is applied to b of shape (2,2). Hence it creates a matrix of shape (2,3,2,2) with no summation, as (i,j), (k,l) are all free indices.
# --------------------------------------------------------------------------------
# For np.einsum("ij,kl", a, b)
# 1-1: Term "ij" or (i,j), two free indices, selects an element a[i][j].
# 1-2: Term "kl" or (k,l), two free indices, selects an element b[k][l].
# 2:   Each a[i][j] is applied on b[k][l] for element-wise multiplication a[i][j] * b[k][l]
# --------------------------------------------------------------------------------
# for (i,j) in a:
#     for (k,l) in b:
#         a[i][j] * b[k][l]
np.einsum("ij,kl", a, b)
array([[[[ 0,  0],
         [ 0,  0]],

        [[10, 11],
         [12, 13]],

        [[20, 22],
         [24, 26]]],


       [[[30, 33],
         [36, 39]],

        [[40, 44],
         [48, 52]],

        [[50, 55],
         [60, 65]]]])
Examples
dot products from matrix A rows and matrix B columns
A = np.matrix('0 1 2; 3 4 5')
B = np.matrix('0 -3; -1 -4; -2 -5')

np.einsum('ij,ji->i', A, B)
# Same as
np.diagonal(np.matmul(A, B))
(A*B).diagonal()
---
[ -5 -50]
[ -5 -50]
[[ -5 -50]]
I think the simplest example is in tensorflow docs
There are four steps to convert your equation to einsum notation. Let's take this equation as an example: C[i,k] = sum_j A[i,j] * B[j,k]
First we drop the variable names. We get ik = sum_j ij * jk
We drop the sum_j term as it is implicit. We get ik = ij * jk
We replace * with ,. We get ik = ij, jk
The output is on the RHS and is separated with -> sign. We get ij, jk -> ik
The einsum interpreter just runs these 4 steps in reverse. All indices missing in the result are summed over.
Here are some more examples from the docs
# Matrix multiplication
einsum('ij,jk->ik', m0, m1) # output[i,k] = sum_j m0[i,j] * m1[j, k]
# Dot product
einsum('i,i->', u, v) # output = sum_i u[i]*v[i]
# Outer product
einsum('i,j->ij', u, v) # output[i,j] = u[i]*v[j]
# Transpose
einsum('ij->ji', m) # output[j,i] = m[i,j]
# Trace
einsum('ii', m) # output = trace(m) = sum_i m[i, i]
# Batch matrix multiplication
einsum('aij,ajk->aik', s, t) # out[a,i,k] = sum_j s[a,i,j] * t[a, j, k]

numpy - column-wise and row-wise sums of a given 2d matrix

I have this numpy matrix (ndarray).
array([[ 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10],
[11, 12, 13, 14, 15],
[16, 17, 18, 19, 20],
[21, 22, 23, 24, 25]])
I want to calculate the column-wise and row-wise sums.
I know this is done by calling respectively
np.sum(mat, axis=0) ### column-wise sums
np.sum(mat, axis=1) ### row-wise sums
but I cannot understand these two calls.
Why is axis 0 giving me the sums column-by-column?!
Shouldn't it be the other way around?
I thought the rows are axis 0, and the columns are axis 1.
What I am seeing as a behavior here looks counter-intuitive
(but I am sure it's OK, I guess I am just missing something important).
I am just looking for some intuitive explanation here.
Thanks in advance.
Intuition around arrays and axes
I want to offer 3 types of intuitions here.
Graphical (How to imagine them visually)
Physical (How they are physically stored)
Logical (How to work with them logically)
Graphical intuition
Consider a numpy array as an n-dimensional object. This n-dimensional object contains elements along each of its directions.
Axes in this representation are the directions of the tensor. So, a 2D matrix has only 2 axes, while a 4D tensor has 4 axes.
Sum in a given axis can be essentially considered as a reduction in that direction. Imagine a 3D tensor being squashed in such a way that it becomes flat (a 2D tensor). The axis tells us which direction to squash or reduce it in.
Physical intuition
NumPy stores its ndarrays as contiguous blocks of memory; each element is stored a fixed number of bytes after the previous one.
When retrieving an element (or a block of elements), NumPy calculates how many strides (bytes) it needs to traverse to get the next element in that direction/axis. For instance, in a C-ordered 3D array of 8-byte integers with shape (2, 2, 4), stepping along axis=2 traverses 8 bytes (depending on the datatype), along axis=1 traverses 8*4 bytes, and along axis=0 traverses 8*8 bytes.
Axes in this representation are basically the series of next elements reached by a given stride. Consider the following array:
print(X)
print(X.strides)
[[0 2 1 4 0 0 0]
[5 0 0 0 0 0 0]
[8 0 0 0 0 0 0]
[0 0 0 0 0 0 0]
[0 0 1 0 0 0 0]
[0 0 0 1 0 0 0]]
#Strides (bytes) required to traverse in each axis.
(56, 8)
In the above array, every element 56 bytes after a given element is the next element along axis=0, and every element 8 bytes after a given element is the next along axis=1 (except for the last element of each row).
Sum or reduction in this regard means taking the sum of every element in that strided series. So, a sum over axis=0 means summing [0,5,8,0,0,0], [2,0,0,0,0,0], ..., and a sum over axis=1 means just summing [0 2 1 4 0 0 0], [5 0 0 0 0 0 0], ...
Logical intuition
This interpretation has to do with element groupings. NumPy stores its ndarrays as groups of groups of groups ... of elements. The innermost groups of elements form the last axis (axis=-1). Then another grouping over them creates the axis before it (axis=-2). The final, outermost grouping is axis=0.
For example, an array of shape (3, 2, 5) is 3 groups of 2 groups of 5 elements.
Similarly, the shape of a NumPy array is also determined by the same.
1D_array = [1,2,3]
2D_array = [[1,2,3]]
3D_array = [[[1,2,3]]]
...
Axes in this representation are the group in which elements are stored. The outermost group is axis=0 and the innermost group is axis=-1.
Sum or reduction in this regard means reducing elements across that specific group or axis. So, a sum over axis=-1 means summing over the innermost groups. Consider a (6, 5, 8) dimensional tensor. When I say I want a sum over some axis, I want to reduce the elements lying in that grouping / direction to a single value that is equal to their sum.
So,
np.sum(arr, axis=-1) will reduce the innermost groups (of length 8) into a single value and return (6,5,1) or (6,5).
np.sum(arr, axis=-2) will reduce the elements that lie in the 1st axis (or -2nd axis) direction and reduce those to a single value returning (6,1,8) or (6,8)
np.sum(arr, axis=0) will similarly reduce the tensor to (1,5,8) or (5,8).
Hope these 3 intuitions are beneficial to anyone trying to understand how axes and NumPy tensors work in general and how to build an intuitive understanding to work better with them.
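A short sketch of the (6, 5, 8) example above (assuming the default keepdims=False; pass keepdims=True to keep the reduced axis as size 1):
import numpy as np

arr = np.arange(6 * 5 * 8).reshape(6, 5, 8)
print(np.sum(arr, axis=-1).shape)                # (6, 5): innermost groups reduced
print(np.sum(arr, axis=-2).shape)                # (6, 8)
print(np.sum(arr, axis=0).shape)                 # (5, 8)
print(np.sum(arr, axis=0, keepdims=True).shape)  # (1, 5, 8)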
Let's start with a one dimensional example:
a, b, c, d, e = 0, 1, 2, 3, 4
arr = np.array([a, b, c, d, e])
If you do,
arr.sum(0)
Output
10
That is the sum of the elements of the array
a + b + c + d + e
Now, before moving on to a 2-dimensional example, let's clarify that in numpy the sum of two 1-dimensional arrays is done element-wise, for example:
a = np.array([1, 2, 3, 4, 5])
b = np.array([6, 7, 8, 9, 10])
print(a + b)
Output
[ 7 9 11 13 15]
Now, if we change our initial variables to arrays instead of scalars, creating a two-dimensional array, and do the sum:
a = np.array([1, 2, 3, 4, 5])
b = np.array([6, 7, 8, 9, 10])
c = np.array([11, 12, 13, 14, 15])
d = np.array([16, 17, 18, 19, 20])
e = np.array([21, 22, 23, 24, 25])
arr = np.array([a, b, c, d, e])
print(arr.sum(0))
Output
[55 60 65 70 75]
The output is the same as for the 1 dimensional example, i.e. the sum of the elements of the array:
a + b + c + d + e
Just that now the elements of the array are 1-dimensional arrays, and the sum of those elements is applied. Now, before explaining the result for axis = 1, let's consider an alternative notation for the sum across axis = 0, basically:
np.array([arr[0, :], arr[1, :], arr[2, :], arr[3, :], arr[4, :]]).sum(0) # [55 60 65 70 75]
That is, we took full slices in all indices other than the first dimension. If we swap to:
res = np.array([arr[:, 0], arr[:, 1], arr[:, 2], arr[:, 3], arr[:, 4]]).sum(0)
print(res)
Output
[ 15 40 65 90 115]
We get the result of the sum along axis=1. So, to sum it up, you are always summing elements of the array. The axis indicates how these elements are grouped.
Intuitively, 'axis 0' goes from top to bottom and 'axis 1' goes from left to right. Therefore, when you sum along 'axis 0' you get the column sum, and along 'axis 1' you get the row sum.
As you go along 'axis 0', the row number increases. As you go along 'axis 1' the column number increases.
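A minimal sketch of that intuition:
import numpy as np

mat = np.arange(1, 7).reshape(2, 3)  # [[1 2 3], [4 5 6]]
print(mat.sum(axis=0))  # [5 7 9]  - collapse top-to-bottom: column sums
print(mat.sum(axis=1))  # [ 6 15] - collapse left-to-right: row sums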
Think of a 1-dimension array:
mat = np.array([1, 2, 3, 4, 5])
Its items are accessed as mat[0], mat[1], etc.
If you do:
np.sum(mat, axis=0)
it will return 15
In the background, it sums all items mat[0], mat[1], mat[2], mat[3], mat[4],
meaning it varies the first index (axis=0)
Now consider a 2-D array:
mat = np.array([[ 1,  2,  3,  4,  5],
                [ 6,  7,  8,  9, 10],
                [11, 12, 13, 14, 15],
                [16, 17, 18, 19, 20],
                [21, 22, 23, 24, 25]])
When you ask for
np.sum(mat, axis=0)
it will again sum all items based on the first index (axis=0) keeping all the rest same. This means that
mat[0][1], mat[1][1], mat[2][1], mat[3][1], mat[4][1]
will give one sum
mat[0][2], mat[1][2], mat[2][2], mat[3][2], mat[4][2]
will give another one, etc
If you consider a 3-D array, the logic will be the same. Every sum will be calculated on the same axis (index) keeping all the rest same. Sums on axis=0 will be produced by:
mat[0][1][1],mat[1][1][1],mat[2][1][1],mat[3][1][1],mat[4][1][1]
etc
Sums on axis=2 will be produced by:
mat[2][3][0], mat[2][3][1], mat[2][3][2], mat[2][3][3], mat[2][3][4]
etc
I hope you understand the logic. To keep things simple in your mind, consider axis = the position of the index in a chained index, e.g. axis=3 on a 7-dimensional array will be:
mat[0][0][0][this is our axis][0][0][0]

Needing to assess smaller 3D arrays in larger 3D array with Numpy

I have to take a random integer 50x50x50 array and determine which contiguous 3x3x3 cube within it has the largest sum.
It seems like a lot of splitting features in Numpy don't work well unless the smaller cubes are evenly divisible into the larger one. Trying to work through the thought process, I made a 48x48x48 cube that is just in order from 0 to 110,591. I then was thinking of reshaping it to a 4D array with the following code and assessing which of the sub-arrays had the largest sum. When I enter this code, though, it splits the array in an order that is not ideal. I want the first array to be the 3x3x3 cube that would have been in the corner of the 48x48x48 cube. Is there a syntax that I can add to make this happen?
import numpy as np
arr1 = np.arange(0,110592)
arr2=np.reshape(arr1, (48,48,48))
arr3 = np.reshape(arr2, (4096, 3,3,3))
arr3
output:
array([[[[ 0,  1,  2],
         [ 3,  4,  5],
         [ 6,  7,  8]],

        [[ 9, 10, 11],
         [12, 13, 14],
         [15, 16, 17]],

        [[18, 19, 20],
         [21, 22, 23],
         [24, 25, 26]]],
desired output:
array([[[[ 0,  1,  2],
         [48, 49, 50],
         [96, 97, 98]],
etc etc
Solution
There's a live version of this solution online you can try for yourself
There's a simple (kind of) solution to your original problem of finding the maximum 3x3x3 subcube in a 50x50x50 cube that's based on changing the input array's strides. This solution is completely vectorized (meaning no looping), and so should get the best possible performance out of Numpy:
import numpy as np
def cubecube(arr, cshape):
    strides = (*arr.strides, *arr.strides)
    shape = (*np.array(arr.shape) - cshape + 1, *cshape)
    return np.lib.stride_tricks.as_strided(arr, shape=shape, strides=strides)

def maxcube(arr, cshape):
    cc = cubecube(arr, cshape)
    ccsums = cc.sum(axis=tuple(range(-arr.ndim, 0)))
    ix = np.unravel_index(np.argmax(ccsums), ccsums.shape)[:arr.ndim]
    return ix, cc[ix]
The maxcube function takes an array and the shape of the subcubes, and returns a tuple of (first-index-of-largest-cube, largest-cube). Here's an example of how to use maxcube:
shape = (50, 50, 50)
cshape = (3, 3, 3)
# set up a 50x50x50 array
arr = np.arange(np.prod(shape)).reshape(*shape)
# set one of the subcubes as the largest
arr[37, 26, 11] = 999999
ix, cube = maxcube(arr, cshape)
print('first index of largest cube: {}'.format(ix))
print('largest cube:\n{}'.format(cube))
which outputs:
first index of largest cube: (37, 26, 11)
largest cube:
[[[999999  93812  93813]
  [ 93861  93862  93863]
  [ 93911  93912  93913]]

 [[ 96311  96312  96313]
  [ 96361  96362  96363]
  [ 96411  96412  96413]]

 [[ 98811  98812  98813]
  [ 98861  98862  98863]
  [ 98911  98912  98913]]]
In depth explanation
A cube of cubes
What you have is a 48x48x48 cube, but what you want is a cube of smaller cubes. One can be converted to the other by altering its strides. For a 48x48x48 array of dtype int64, the stride will originally be set as (48*48*8, 48*8, 8). The first value of each non-overlapping 3x3x3 subcube can be iterated over with a stride of (3*48*48*8, 3*48*8, 3*8). Combine these strides to get the strides of the cube of cubes:
# Set up a 48x48x48 array, like in OP's example
arr = np.arange(48**3).reshape(48,48,48)
shape = (16,16,16,3,3,3)
strides = (3*48*48*8, 3*48*8, 3*8, 48*48*8, 48*8, 8)
# restride into a 16x16x16 array of 3x3x3 cubes
arr2 = np.lib.stride_tricks.as_strided(arr, shape=shape, strides=strides)
arr2 is a view of arr (meaning that they share data, so no copy needs to be made) with a shape of (16,16,16,3,3,3). The ijk-th 3x3x3 cube in arr can be accessed by passing the indices to arr2:
i,j,k = 0,0,0
print(arr2[i,j,k])
Output:
[[[   0    1    2]
  [  48   49   50]
  [  96   97   98]]

 [[2304 2305 2306]
  [2352 2353 2354]
  [2400 2401 2402]]

 [[4608 4609 4610]
  [4656 4657 4658]
  [4704 4705 4706]]]
You can get the sums of all of the subcubes by just summing across the inner axes:
sumOfSubcubes = arr2.sum(axis=(3, 4, 5))
This will yield a 16x16x16 array in which each value is the sum of a non-overlapping 3x3x3 subcube from your original array. This solves the specific problem about the 48x48x48 array that the OP asked about. Restriding can also be used to find all of the overlapping 3x3x3 cubes, as in the cubecube function above.
Your thought process with the 48x48x48 cube goes in the right direction insofar as there are 48³ different contiguous 3x3x3 cubes within the 50x50x50 array, though I don't understand why you would want to reshape it.
What you could do is add all 27 values of each 3x3x3 cube into a 48x48x48-dimensional array by going through all 27 offset combinations of adjacent slices and then find the maximum over it. The found entry will give you the index tuple coordinate_index of the cube corner that is closest to the origin of your original array.
import numpy as np
np.random.seed(0)

array_shape = np.array((50,50,50), dtype=int)
cube_dim = np.array((3,3,3), dtype=int)
original_array = np.random.randint(0, 100, array_shape)  # random integers; any range works
reduced_shape = array_shape - cube_dim + 1
sum_array = np.zeros(reduced_shape, dtype=int)
for i in range(cube_dim[0]):
    for j in range(cube_dim[1]):
        for k in range(cube_dim[2]):
            # end offsets i:i+48 etc. keep every slice the same shape
            sum_array += original_array[
                i:i+reduced_shape[0], j:j+reduced_shape[1], k:k+reduced_shape[2]
            ]
flat_index = np.argmax(sum_array)
coordinate_index = np.unravel_index(flat_index, reduced_shape)
This method should be faster than looping over each of the 48³ index combinations to find the desired cube, as it uses in-place summation, but in turn requires more memory. I'm not sure about it, but defining a (48³, 27) array with slices and using np.sum over the second axis could be even faster (see the sketch below).
You can easily change the above code to find a cuboid with arbitrary side lengths instead.
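For what it's worth, here is a sketch of that stacked-slices idea (maxcube_stacked is a hypothetical helper; it allocates the full (48³, 27) intermediate, trading memory for a single reduction, and is untimed):
import numpy as np

def maxcube_stacked(original_array, cube_dim):
    # Stack all 27 shifted views as columns of a (48**3, 27) array,
    # then reduce once over the second axis.
    reduced_shape = np.array(original_array.shape) - cube_dim + 1
    slabs = [original_array[i:i+reduced_shape[0],
                            j:j+reduced_shape[1],
                            k:k+reduced_shape[2]].ravel()
             for i in range(cube_dim[0])
             for j in range(cube_dim[1])
             for k in range(cube_dim[2])]
    sums = np.stack(slabs, axis=1).sum(axis=1)
    return np.unravel_index(np.argmax(sums), tuple(reduced_shape))

np.random.seed(0)
arr = np.random.randint(0, 100, (50, 50, 50))
print(maxcube_stacked(arr, np.array((3, 3, 3), dtype=int)))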
This is a solution without many numpy functions, just numpy.sum. First define a square matrix and then the size of the cube cs you are going to perform the summation within.
Just change cs to adjust the cube size and find other solutions. Following @Divakar's suggestion, I have used a 4x4x4 array, and I also store the location of the cube (just the vertex at the cube's origin):
import numpy as np
np.random.seed(0)
a = np.random.randint(0, 9, (4,4,4))
print(a)

cs = 2  # Cube size
my_sum = 0
idx = None
for i in range(a.shape[0] - cs + 1):
    for j in range(a.shape[1] - cs + 1):
        for k in range(a.shape[2] - cs + 1):
            cube_sum = np.sum(a[i:i+cs, j:j+cs, k:k+cs])
            if cube_sum > my_sum:
                my_sum = cube_sum
                idx = (i, j, k)
print(my_sum, idx)  # 42 (0, 0, 0)
This 3D array a is
[[[5 0 3 3]
  [7 3 5 2]
  [4 7 6 8]
  [8 1 6 7]]

 [[7 8 1 5]
  [8 4 3 0]
  [3 5 0 2]
  [3 8 1 3]]

 [[3 3 7 0]
  [1 0 4 7]
  [3 2 7 2]
  [0 0 4 5]]

 [[5 6 8 4]
  [1 4 8 1]
  [1 7 3 6]
  [7 2 0 3]]]
And you get my_sum = 42 and idx = (0, 0, 0) for cs = 2. And my_sum = 112 and idx = (1, 0, 0) for cs = 3
Here is a cumsum based fast solution:
import numpy as np

nd = 3
cs = 3
N = 50

# create indices [cs-1:, ...], [:, cs-1:, ...], ...
fromcsm = *zip(*np.where(np.identity(nd, bool), np.s_[cs-1:], np.s_[:])),
# create indices [cs:, ...], [:, cs:, ...], ...
fromcs = *zip(*np.where(np.identity(nd, bool), np.s_[cs:], np.s_[:])),
# create indices [:cs, ...], [:, :cs, ...], ...
tocs = *zip(*np.where(np.identity(nd, bool), np.s_[:cs], np.s_[:])),
# create indices [:-cs, ...], [:, :-cs, ...], ...
tomcs = *zip(*np.where(np.identity(nd, bool), np.s_[:-cs], np.s_[:])),
# create indices [cs-1, ...], [:, cs-1, ...], ...
atcsm = *zip(*np.where(np.identity(nd, bool), cs-1, np.s_[:])),

def windowed_sum(a):
    out = a.copy()
    for i, (fcsm, fcs, tcs, tmcs, acsm) \
            in enumerate(zip(fromcsm, fromcs, tocs, tomcs, atcsm)):
        out[fcs] -= out[tmcs]
        out[acsm] = out[tcs].sum(axis=i)
        out = out[fcsm].cumsum(axis=i)
    return out
This returns the sums over all the sub cubes. We can then use argmax and unravel_index to get the offset of the maximum cube. Example:
np.random.seed(0)
a = np.random.randint(0, 9, (N, N, N))
s = windowed_sum(a)
idx = np.unravel_index(np.argmax(s), s.shape)

Can't append numpy arrays after for loop?

After a for loop, I cannot append each iteration into a single array:
in:
for a in l:
    arr = np.asarray(a_lis)
    print(arr)
How can I append and return in a single array the above three arrays?:
[[ 0.55133 0.58122 0.66129032 0.67562724 0.69354839 0.70609319
0.6702509 0.63799283 0.61827957 0.6155914 0.60842294 0.60215054
0.59946237 0.625448 0.60215054 0.60304659 0.59856631 0.59677419
0.59408602 0.61021505]
[ 0.58691756 0.6784946 0.64964158 0.66397849 0.67114695 0.66935484
0.67293907 0.66845878 0.65143369 0.640681 0.63530466 0.6344086
0.6281362 0.6281362 0.62634409 0.6281362 0.62903226 0.63799283
0.63709677 0.6978495]
[ 0.505018 0.53405018 0.59408602 0.65143369 0.66577061 0.66487455
0.65412186 0.64964158 0.64157706 0.63082437 0.62634409 0.6218638
0.62007168 0.6648746 0.62096774 0.62007168 0.62096774 0.62007168
0.62275986 0.81362 ]]
I tried to append as a list, using numpy's append, merge, and hstack. None of them worked. Any idea of how to get the previous output?
Use numpy.concatenate to join the arrays:
import numpy as np
a = np.array([[1, 2, 3, 4]])
b = np.array([[5, 6, 7, 8]])
arr = np.concatenate((a, b), axis=0)
print(arr)
# [[1 2 3 4]
# [5 6 7 8]]
Edit1: To build the array inside the loop (as mentioned in the comments) you can use numpy.vstack:
import numpy as np

for i in range(0, 3):
    a = np.random.randint(0, 10, size=4)
    if i == 0:
        arr = a
    else:
        arr = np.vstack((arr, a))
print(arr)
# [[1 1 8 7]
#  [2 4 9 1]
#  [8 4 7 5]]
Edit2: Citing Iguananaut from the comments:
That said, using concatenate repeatedly can be costly. If you know the
size of the output in advance it's better to pre-allocate an array and
fill it as you go.
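Following that advice, a minimal sketch of pre-allocating and filling (n_rows and row_len are placeholders for whatever sizes your loop actually produces):
import numpy as np

n_rows, row_len = 3, 20
arr = np.empty((n_rows, row_len))
for i in range(n_rows):
    arr[i] = np.random.rand(row_len)  # stand-in for each iteration's 1D result
print(arr.shape)  # (3, 20)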

How to sum up (W * H) of 3D matrix and store it in 1D matrix with length=depth(third dimension of input matrix)

I want to sum up all elements (W * H) of a 3D matrix and store the result in a 1D matrix with length = depth (the third dimension of the input matrix).
To make myself clear:
Input dimension = 1D in the form of (W * H * D).
Required output = 1D again with length=D
let's consider the below 3D matrix: 2 x 3 x 2.

Layer 1        Layer 2
[1, 2, 3       [ 7,  8,  9
 4, 5, 6]       10, 11, 12]

output is 1D: [21, 57]
I am new to Python and wrote it like this:
def test(w, h, c, image_inp):
    output = [image_inp[j * w + k] for i in enumerate(image_inp)
              for j in range(0, h)
              for k in range(0, w)
              # image_inp[j * w + k] comment
              ]
    printout(output)
I know this will copy the input array as it is to the output array;
also, the output array length is not equal to the depth.
Someone please help me get this right:
def test(w, h, c, image_inp):
    output = [hwsum for i in enumerate(image_inp)
              hwsum += wsum for j in range(0,h)
              wsum += image_inp[j*w + k] for k in range(0,w)
              # image_inp[j * w + k]
              ]
    print("Calling outprint")
    printout(output)
Note: I do not want to use numpy (with it, this is working) or any math libraries.
The reason is that I am writing test code in Python to evaluate a language I am working on.
EDIT:
the input matrix will be entering the test function as 1D, with w, h, c as arguments, so it takes the form:
[1,2,3,4,5,6,7,8,9,10,11,12],
and with w, h, c the function has to treat the 1D input as a 3D matrix.
thanks
Numpy is very suitable for slicing and manipulating single and multiple dimensional data. It is fast, easy to use and very "pythonic".
Following your example, you can just do:
>>> import numpy
>>> img3d=numpy.array([[[1,2,3],[4,5,6]],[[7,8,9],[10,12,12]]])
>>> img3d.shape
(2, 2, 3)
You can see here that img3d has 2 layers, 2 rows and 3 columns. You can just slice using indexing like this:
>>> img3d[0,:,:]
array([[1, 2, 3],
[4, 5, 6]])
To go from 3D to 1D, just use numpy.ndarray.flatten():
>>> f=img3d.flatten()
>>> f
array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 12])
And reversed, use numpy.reshape():
>>> f.reshape((2,2,3))
array([[[ 1,  2,  3],
        [ 4,  5,  6]],

       [[ 7,  8,  9],
        [10, 12, 12]]])
Now just sum using numpy.sum, giving the dimensions you want to add over (in your case, dimensions 1 and 2, dimensions being 0-indexed):
>>> numpy.sum(img3d,(1,2))
array([21, 58])
Just to summarize in a one-liner, you can do (variable names from your question; assuming the 1D input is stored layer by layer, so that the depth c becomes the first axis after reshaping):
>>> numpy.sum(numpy.array(image_inp).reshape(c, h, w), (1, 2))
From the numpy manual on numpy.sum:
numpy.sum(a, axis=None, dtype=None, out=None, keepdims=<no value>)
Sum of array elements over a given axis.
Parameters:
a : array_like Elements to sum.
axis : None or int or
tuple of ints, optional Axis or axes along which a sum is performed.
The default, axis=None, will sum all of the elements of the input
array. If axis is negative it counts from the last to the first axis.
New in version 1.7.0.: If axis is a tuple of ints, a sum is performed
on all of the axes specified in the tuple instead of a single axis or
all the axes as before.
If your matrix is set as your post implies with your "3D" matrix as an array of arrays:
M = [ [ 1, 2, 3,
        4, 5, 6],
      [ 7, 8, 9,
       10,11,12],
    ]

array_of_sums = []
for pseudo_2D_matrix in M:
    array_of_sums.append(sum(pseudo_2D_matrix))
If your 3D matrix, as a real three dimensional object, is set up as:
M = [
      [ [ 1, 2, 3],
        [ 4, 5, 6]
      ],
      [ [ 7, 8, 9],
        [10,11,12],
      ]
    ]
You could create a 1D array of sums by doing the following:
array_of_sums = []
for matrix_2D in M:
    s = 0
    for row in matrix_2D:
        s += sum(row)
    array_of_sums.append(s)
It's a bit unclear how your data are formatted, but hopefully you get the idea from these two examples.
EDIT:
In light of clarification on input you could easily accomplish this:
If dimensions w, h, c are given as the dimensional breakout of the array [1,2,3,4,5,6,7,8,9,10,11,12], then you simply need to mark off those regions and sum based on that:
input_array = [1,2,3,4,5,6,7,8,9,10,11,12]
w, h, c = 2, 3, 2
array_of_sums = []
i = 0
while i < w:
    array_of_sums.append(sum(input_array[i*h*c:(i+1)*h*c]))
    i += 1
as a simplified method:
def sum_2D_slices(w, h, c, matrix_3D):
    return [sum(matrix_3D[i*h*c:(i+1)*h*c]) for i in range(w)]
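For the example input this gives:
>>> sum_2D_slices(2, 3, 2, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])
[21, 57]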
