Indexing 2d array with 2d array in Numpy - python

I have a question that has been bothering me for a few days.
Let's assume we define a 2d array in Numpy:
x = np.array([[0, 1, 2],
              [3, 4, 5],
              [6, 7, 8]])
We also define a 1d array for indexing, let's say:
ind = np.array([2,1])
If we try x[ind] we get:
array([[6, 7, 8],
       [3, 4, 5]])
which makes a lot of sense: row number 2 and row number 1 of x.
If we run x[:,ind] we get:
array([[2, 1],
       [5, 4],
       [8, 7]])
Again, it makes a lot of sense - we receive column number 2 followed by column number 1.
Now we will define the index array as 2d:
ind = np.array([[2, 1],
                [2, 2]])
If we run x[ind] we get:
array([[[6, 7, 8],
        [3, 4, 5]],
       [[6, 7, 8],
        [6, 7, 8]]])
Again, it makes sense - for each row in the indexing 2d array we receive a 2d array that represents the corresponding rows from the original 2d array x.
However, if we run x[:,ind] we receive the following array:
array([[[2, 1],
        [2, 2]],
       [[5, 4],
        [5, 5]],
       [[8, 7],
        [8, 8]]])
I don't understand this output, since it returns specific items from the indexed columns rather than the full columns. I would have assumed that, just as in the case of x[:,ind] when ind was a 1d array, we would receive 2d arrays made up of the original columns of the original x array.

In the last case with the indexing array:
print(ind)
array([[2, 1],
       [2, 2]])
Since ind is a 2D array of shape (2,2), and you're taking a full slice along the first axis, with ind you'll be indexing along the columns of x on each of its rows. So, for instance, by indexing the second row [3, 4, 5] with ind, you'll get the elements at indices 2->5, 1->4, 2->5 and 2->5 again, with the resulting shape being the same as ind, i.e. [[5, 4], [5, 5]].
The same happens for each of the rows, resulting in a (3, 2, 2) shaped array.
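A quick way to convince yourself of this is to check that each block of the result along the first axis is simply that row of x indexed element-wise by ind. A minimal sketch, using the same x and ind as above:
import numpy as np

x = np.array([[0, 1, 2],
              [3, 4, 5],
              [6, 7, 8]])
ind = np.array([[2, 1],
                [2, 2]])

result = x[:, ind]       # shape (3, 2, 2): one (2, 2) block per row of x
print(result.shape)      # (3, 2, 2)

# For every row i of x, result[i] equals x[i][ind],
# i.e. that single row indexed element-wise by ind.
for i in range(x.shape[0]):
    assert np.array_equal(result[i], x[i][ind])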

Related

axis in numpy.ufunc.outer

numpy.ufunc.outer is like Mathematica Outer[], but what I can't seem to figure out is how to treat a two dimensional array as a one dimensional array of one dimensional arrays. That is, suppose
a = [[1, 2], [3, 4]] and b = [[4, 5], [6, 7]]. I want to compute the two dimensional array whose ij-th element is the distance (however I define it) between the i-th row of a and the j-th row of b, so in this case, if we use the sup-norm distance, the result will be [[3, 5], [1, 3]]. Obviously one can write a loop, but that seems morally wrong, and is precisely what ufunc.outer is meant to avoid.
In [309]: a = np.array([[1, 2], [3, 4]]); b = np.array([[4, 5], [6, 7]])
With broadcasting we can take the row differences:
In [310]: a[:,None,:]-b[None,:,:]
Out[310]:
array([[[-3, -3],
        [-5, -5]],
       [[-1, -1],
        [-3, -3]]])
and reduce those with abs followed by max on the last axis (I think that's what you mean by the sup norm):
In [311]: np.abs(a[:,None,:]-b[None,:,:]).max(axis=-1)
Out[311]:
array([[3, 5],
       [1, 3]])
With subtract.outer, I have to select a subset of the results, and then transpose:
In [318]: np.subtract.outer(a,b)[:,[0,1],:,[0,1]].transpose(2,1,0)
Out[318]:
array([[[-3, -3],
        [-1, -1]],
       [[-5, -5],
        [-3, -3]]])
I don't see any axis controls in the outer docs. Since broadcasting gives finer control, I haven't seen much use of the ufunc.outer feature.
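If this comes up often, the broadcasting approach can be wrapped in a small helper. A minimal sketch, assuming the sup-norm (Chebyshev) distance between rows is what's wanted; the name pairwise_supnorm is purely illustrative, not a NumPy API:
import numpy as np

def pairwise_supnorm(a, b):
    # a: (m, d), b: (n, d) -> (m, n) matrix where entry [i, j] is the
    # sup-norm distance between row i of a and row j of b, via broadcasting.
    return np.abs(a[:, None, :] - b[None, :, :]).max(axis=-1)

a = np.array([[1, 2], [3, 4]])
b = np.array([[4, 5], [6, 7]])
print(pairwise_supnorm(a, b))
# [[3 5]
#  [1 3]]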

how do I add two numpy arrays correctly?

np_mat = np.array([[1, 2], [3, 4], [5, 6]])
np_mat + np.array([10, 10])
I am confused what the difference between np.array([10, 10]) and np.array([[10, 10]]) is. In school I learnt that only matrices with the same dimensions can be added. When I use the shape method on np.array([10, 10]) it gives me (2,)...what does that mean? How is it possible to add np_mat and np.array([10, 10])? The dimensions don't look the same to me. What do I not understand?
It looks like numpy is bending the rules of mathematics here. Indeed, it adds the second array [10, 10] to each row of the first, [[1, 2], [3, 4], [5, 6]].
This is called broadcasting (see https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html). The shape of [10, 10] is (2,) (that is, mathematically, 2) and that of [[1, 2], [3, 4], [5, 6]] is (3, 2) (that is, mathematically, 3 x 2). Therefore, by the general broadcasting rules, you get a result of shape (3, 2) (that is, mathematically, 3 x 2).
I am confused what the difference between np.array([10, 10]) and np.array([[10, 10]]) is.
The first is an array. The second is an array of arrays (in memory, it is in fact one single array, but this is not relevant here). You could think of the first as a column vector (a matrix of size 2 x 1) and the second as a row vector (a matrix of size 1 x 2). However, be warned that the distinction between row and column vectors is irrelevant in mathematics until you start interpreting vectors as matrices.
You cannot add two arrays of different sizes.
But here the shapes are compatible: the last dimension of np_mat is 2, which matches the length of np.array([10, 10]).
Shape (2,) means that the array is one-dimensional and the first dimension is of size 2.
np.array([[1, 2], [3, 4], [5, 6]]) has shape (3, 2) which means two-dimensional (3x2).
But you can still add them even though they have different numbers of dimensions, because numpy stretches the smaller array across the larger one, much as it coerces a plain number into an array full of that same number. This is called broadcasting in numpy.
I.e. your code gives results equivalent to:
np_mat = np.array([[1, 2], [3, 4], [5, 6]])
np_mat + 10
Or to:
np_mat = np.array([[1, 2], [3, 4], [5, 6]])
np_mat + np.array([[10, 10], [10, 10], [10, 10]])
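As a quick check that all of these really do agree, here is a minimal sketch (plain NumPy, nothing beyond the arrays already used above):
import numpy as np

np_mat = np.array([[1, 2], [3, 4], [5, 6]])

a = np_mat + np.array([10, 10])                         # shape (2,) broadcast over the rows
b = np_mat + np.array([[10, 10]])                       # shape (1, 2) broadcast over the rows
c = np_mat + np.array([[10, 10], [10, 10], [10, 10]])   # explicit same-shape addition

print(a.shape)                                       # (3, 2)
print(np.array_equal(a, b), np.array_equal(a, c))    # True True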

Efficiently change order of numpy array

I have a 3 dimensional numpy array. Its dimensions can go up to 128 x 64 x 8192. What I want to do is to change the order along the first dimension by swapping neighbouring pairs.
The only idea I had so far is to create a list of the indices in the correct order.
order = [1,0,3,2...127,126]
data_new = data[order]
I fear that this is not very efficient, but I have no better idea so far.
You could reshape to split the first axis into two axes such that the latter of those axes is of length 2, then flip the array along that axis with [::-1], and finally reshape back to the original shape.
Thus, we would have an implementation like so -
a.reshape(-1,2,*a.shape[1:])[:,::-1].reshape(a.shape)
Sample run -
In [170]: a = np.random.randint(0,9,(6,3))
In [171]: order = [1,0,3,2,5,4]
In [172]: a[order]
Out[172]:
array([[0, 8, 5],
       [4, 5, 6],
       [0, 0, 2],
       [7, 3, 8],
       [1, 6, 3],
       [2, 4, 4]])
In [173]: a.reshape(-1,2,*a.shape[1:])[:,::-1].reshape(a.shape)
Out[173]:
array([[0, 8, 5],
       [4, 5, 6],
       [0, 0, 2],
       [7, 3, 8],
       [1, 6, 3],
       [2, 4, 4]])
Alternatively, if you are looking to efficiently create that pairwise-flipped index order, you could do something like this -
order = np.arange(data.shape[0]).reshape(-1,2)[:,::-1].ravel()
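As a sanity check that the two approaches agree, here is a minimal sketch (using a small shape for illustration rather than the full 128 x 64 x 8192 array):
import numpy as np

data = np.random.rand(8, 3, 4)   # small stand-in for the real array

# Index-based reordering: swap each neighbouring pair along axis 0.
order = np.arange(data.shape[0]).reshape(-1, 2)[:, ::-1].ravel()
via_index = data[order]

# Reshape/flip/reshape, avoiding the explicit index array.
via_reshape = data.reshape(-1, 2, *data.shape[1:])[:, ::-1].reshape(data.shape)

print(np.array_equal(via_index, via_reshape))   # True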

Numpy: operations on columns (or rows) of NxM array

This may be a silly question, but I've just started using numpy and I have to figure out how to perform some simple operations.
Suppose that I have the 2x3 array
array([[1, 3, 5],
       [2, 4, 6]])
and that I want to perform some operation on the first column, for example subtract 1 from all its elements to get
array([[0, 3, 5],
       [1, 4, 6]])
How can I perform such an operation?
arr
# array([[1, 3, 5],
#        [2, 4, 6]])
arr[:, 0] = arr[:, 0] - 1  # choose the first column here, subtract one and
                           # assign it back to the same column
arr
# array([[0, 3, 5],
#        [1, 4, 6]])

understanding numpy's dstack function

I have some trouble understanding what numpy's dstack function is actually doing. The documentation is rather sparse and just says:
Stack arrays in sequence depth wise (along third axis).
Takes a sequence of arrays and stack them along the third axis
to make a single array. Rebuilds arrays divided by dsplit.
This is a simple way to stack 2D arrays (images) into a single
3D array for processing.
So either I am really stupid and the meaning of this is obvious or I seem to have some misconception about the terms 'stacking', 'in sequence', 'depth wise' or 'along an axis'. However, I was of the impression that I understood these terms in the context of vstack and hstack just fine.
Let's take this example:
In [193]: a
Out[193]:
array([[0, 3],
       [1, 4],
       [2, 5]])
In [194]: b
Out[194]:
array([[ 6,  9],
       [ 7, 10],
       [ 8, 11]])
In [195]: dstack([a,b])
Out[195]:
array([[[ 0,  6],
        [ 3,  9]],
       [[ 1,  7],
        [ 4, 10]],
       [[ 2,  8],
        [ 5, 11]]])
First of all, a and b don't have a third axis so how would I stack them along 'the third axis' to begin with? Second of all, assuming a and b are representations of 2D-images, why do I end up with three 2D arrays in the result as opposed to two 2D-arrays 'in sequence'?
It's easier to understand what np.vstack, np.hstack and np.dstack* do by looking at the .shape attribute of the output array.
Using your two example arrays:
print(a.shape, b.shape)
# (3, 2) (3, 2)
np.vstack concatenates along the first dimension...
print(np.vstack((a, b)).shape)
# (6, 2)
np.hstack concatenates along the second dimension...
print(np.hstack((a, b)).shape)
# (3, 4)
and np.dstack concatenates along the third dimension.
print(np.dstack((a, b)).shape)
# (3, 2, 2)
Since a and b are both two dimensional, np.dstack expands them by inserting a third dimension of size 1. This is equivalent to indexing them in the third dimension with np.newaxis (or alternatively, None) like this:
print(a[:, :, np.newaxis].shape)
# (3, 2, 1)
If c = np.dstack((a, b)), then c[:, :, 0] == a and c[:, :, 1] == b.
You could do the same operation more explicitly using np.concatenate like this:
print(np.concatenate((a[..., None], b[..., None]), axis=2).shape)
# (3, 2, 2)
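As a quick sanity check that this concatenate form really is the same operation, and that the depth slices recover the inputs (a minimal sketch reusing the same a and b):
import numpy as np

a = np.array([[0, 3], [1, 4], [2, 5]])
b = np.array([[6, 9], [7, 10], [8, 11]])

c = np.dstack((a, b))
d = np.concatenate((a[..., None], b[..., None]), axis=2)

print(np.array_equal(c, d))            # True
print(np.array_equal(c[:, :, 0], a))   # True
print(np.array_equal(c[:, :, 1], b))   # True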
* Importing the entire contents of a module into your global namespace using import * is considered bad practice for several reasons. The idiomatic way is to import numpy as np.
Let x = dstack([a, b]). Then x[:, :, 0] is identical to a, and x[:, :, 1] is identical to b. In general, when dstacking 2D arrays, dstack produces an output such that output[:, :, n] is identical to the nth input array.
If we stack 3D arrays rather than 2D:
import numpy
x = numpy.zeros([2, 2, 3])
y = numpy.ones([2, 2, 4])
z = numpy.dstack([x, y])
then z[:, :, :3] would be identical to x, and z[:, :, 3:7] would be identical to y.
As you can see, we have to take slices along the third axis to recover the inputs to dstack. That's why dstack behaves the way it does.
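A minimal check of that claim, simply restating the 3D snippet above with explicit assertions:
import numpy as np

x = np.zeros([2, 2, 3])
y = np.ones([2, 2, 4])
z = np.dstack([x, y])

print(z.shape)                          # (2, 2, 7)
assert np.array_equal(z[:, :, :3], x)   # first 3 depth slices recover x
assert np.array_equal(z[:, :, 3:7], y)  # remaining 4 depth slices recover y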
I'd like to take a stab at visually explaining this (even though the accepted answer makes enough sense, it took me a few seconds to rationalise this to my mind).
If we imagine the 2d arrays as a list of lists, where the 1st axis selects one of the inner lists and the 2nd axis selects a value within that list, then the visual representation of the OP's arrays will be this:
a = [
    [0, 3],
    [1, 4],
    [2, 5]
]
b = [
    [6, 9],
    [7, 10],
    [8, 11]
]
# Shape of each array is [3,2]
Now, according to the current documentation, the dstack function adds a 3rd axis, which means each of the arrays ends up looking like this:
a = [
    [[0], [3]],
    [[1], [4]],
    [[2], [5]]
]
b = [
    [[6], [9]],
    [[7], [10]],
    [[8], [11]]
]
# Shape of each array is [3,2,1]
Now, stacking both these arrays in the 3rd dimension simply means that the result should look, as expected, like this:
dstack([a,b]) = [
    [[0, 6], [3, 9]],
    [[1, 7], [4, 10]],
    [[2, 8], [5, 11]]
]
# Shape of the combined array is [3,2,2]
Hope this helps.
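Since the docstring quoted in the question also mentions dsplit, here is a small sketch (plain NumPy, using the same a and b) showing that dsplit undoes dstack:
import numpy as np

a = np.array([[0, 3], [1, 4], [2, 5]])
b = np.array([[6, 9], [7, 10], [8, 11]])

c = np.dstack([a, b])        # shape (3, 2, 2)
a2, b2 = np.dsplit(c, 2)     # each piece has shape (3, 2, 1)

# Dropping the trailing length-1 axis recovers the originals.
print(np.array_equal(a2[:, :, 0], a))   # True
print(np.array_equal(b2[:, :, 0], b))   # True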
Because you mention "images", I think this example would be useful. If you're using Keras to train a 2D convolution network with the input X, then it is best to keep X with the dimension (#images, dim1ofImage, dim2ofImage).
image1 = np.array([[4,2],[5,5]])
image2 = np.array([[3,1],[6,7]])
image1 = image1.reshape(1,2,2)
image2 = image2.reshape(1,2,2)
X = np.stack((image1,image2),axis=1)
X
array([[[[4, 2],
         [5, 5]],
        [[3, 1],
         [6, 7]]]])
np.shape(X)
(1, 2, 2, 2)
X = X.reshape((2,2,2))
X
array([[[4, 2],
        [5, 5]],
       [[3, 1],
        [6, 7]]])
X[0] # image 1
array([[4, 2],
       [5, 5]])
X[1] # image 2
array([[3, 1],
       [6, 7]])
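For what it's worth, if the goal is simply an array of shape (#images, height, width), a shorter route (a sketch, not part of the answer above) is to stack the original 2-D images along a new leading axis:
import numpy as np

image1 = np.array([[4, 2], [5, 5]])
image2 = np.array([[3, 1], [6, 7]])

X = np.stack((image1, image2), axis=0)   # shape (2, 2, 2): (#images, h, w)
print(X.shape)                           # (2, 2, 2)
print(np.array_equal(X[0], image1))      # True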
