Sum 3 dimensions of a 4-dimensional matrix simultaneously - python

I have a 4-D array, mat4. Instead of calling np.sum(mat4, axis=...) three times, once per dimension, is there a way to sum over several dimensions of the array simultaneously?
# Sum `mat4` over every axis except `axis=0`
import numpy as np

mat4 = np.random.rand(2, 3, 4, 5)
matsum = np.sum(mat4, axis=3)
matsum = np.sum(matsum, axis=2)
matsum = np.sum(matsum, axis=1)
print(matsum.shape)
>> (2,)

The axis keyword can be either an int or a tuple of ints, so
you can simply use
np.sum(mat4, axis=(1, 2, 3))
From np.sum docs:
If axis is a tuple of ints, a sum is performed on all of the axes
specified in the tuple instead of a single axis or all the axes as before.
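For completeness, here is a minimal sketch (reusing mat4 from the question) showing that the tuple-of-axes call reproduces the three chained sums:
import numpy as np

mat4 = np.random.rand(2, 3, 4, 5)

# One call with a tuple of axes
one_shot = np.sum(mat4, axis=(1, 2, 3))

# The three chained sums from the question
chained = np.sum(np.sum(np.sum(mat4, axis=3), axis=2), axis=1)

print(one_shot.shape)                  # (2,)
print(np.allclose(one_shot, chained))  # True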

Related

Avoid for-loop in array operation from 1D array of size kN to 1D array of size N

Input: I have a 1D numpy array of size 3N. Every three elements of the 3N-size array can be denoted as xi, yi, zi where i = 1 ... N.
Output: With this array as input, I want to return an output array of size N, obtained by applying a numpy operation to every group of three elements (i.e., xi, yi, zi). That is, the ith element of the output array is numpy_operation(xi, yi, zi).
Explanation: Here is a figure to illustrate the problem:
Here, the input array has size 99 (= 3 x 33) and the output array has size 33. As an example, I am applying numpy.argmin(...) to every three elements of the input array.
Is there any trick so that I can avoid for-loop like this?
for i in range(len(output_array)):
output_array[i] = np.argmin(input_array[i * 3 : i * 3 + 3])
Reshape and argmin:
arr.reshape(-1,3).argmin(axis=1)
You can reshape and apply np.argmin() on axis=1
import numpy as np

n = np.random.random(3 * 100)
out = np.argmin(n.reshape((-1, 3)), axis=1)
print(n.shape)    # (300,)
print(out.shape)  # (100,)
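As a sanity check, here is a small sketch (with a hypothetical N = 33, as in the figure) confirming that the reshaped argmin matches the original loop:
import numpy as np

N = 33  # hypothetical size, matching the figure
input_array = np.random.random(3 * N)

# Vectorised version: one argmin per row of the (N, 3) view
vectorised = input_array.reshape(-1, 3).argmin(axis=1)

# Original loop for comparison
output_array = np.empty(N, dtype=int)
for i in range(N):
    output_array[i] = np.argmin(input_array[i * 3 : i * 3 + 3])

print(np.array_equal(vectorised, output_array))  # True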

How to flatten an array to a matrix in Numpy?

I am looking for an elegant way to flatten an array of arbitrary shape to a matrix based on a single parameter that specifies the dimension to retain. For illustration, I would like
def my_func(input, dim):
# code to compute output
return output
Given, for example, an input array of shape 2x3x4, the output should be: for dim=0 an array of shape 12x2; for dim=1 an array of shape 8x3; for dim=2 an array of shape 6x4. If I want to flatten only the last dimension, this is easily accomplished by
input.reshape(-1, input.shape[-1])
But I would like to generalise this to an arbitrary dim (elegantly, without enumerating all possible cases with if conditions, etc.). It might be possible to first swap dimensions, so that the dimension of interest is trailing, and then apply the operation above.
Any help?
We can permute axes and reshape -
# a is input array; axis is input axis/dim
np.moveaxis(a,axis,-1).reshape(-1,a.shape[axis])
Functionally, this pushes the specified axis to the back, then reshapes so that the length of that axis forms the second axis while the remaining axes are merged into the first axis.
Sample runs -
In [32]: a = np.random.rand(2,3,4)
In [33]: axis = 0
In [34]: np.moveaxis(a,axis,-1).reshape(-1,a.shape[axis]).shape
Out[34]: (12, 2)
In [35]: axis = 1
In [36]: np.moveaxis(a,axis,-1).reshape(-1,a.shape[axis]).shape
Out[36]: (8, 3)
In [37]: axis = 2
In [38]: np.moveaxis(a,axis,-1).reshape(-1,a.shape[axis]).shape
Out[38]: (6, 4)
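Wrapped into the my_func signature from the question, this could look like the sketch below (the argument is renamed from input to a to avoid shadowing the built-in):
import numpy as np

def my_func(a, dim):
    # Push `dim` to the back, keep its length as the second axis,
    # and merge the remaining axes into the first axis.
    return np.moveaxis(a, dim, -1).reshape(-1, a.shape[dim])

a = np.random.rand(2, 3, 4)
print(my_func(a, 0).shape)  # (12, 2)
print(my_func(a, 1).shape)  # (8, 3)
print(my_func(a, 2).shape)  # (6, 4)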

numpy: summing along all but last axis

If I have an ndarray of arbitrary shape and I would like to compute the sum along all but the last axis I can, for instance, achieve it by doing
all_but_last = tuple(range(arr.ndim - 1))
sum = arr.sum(axis=all_but_last)
Now, tuple(range(arr.ndim - 1)) is not exactly intuitive, I feel. Is there a more elegant/numpy-esque way to do this?
Moreover, if I want to do this for multiple arrays of varying shape, I'll have to calculate a separate dimension tuple for each of them. Is there a more canonical way to say "regardless of what the dimensions are, just give me all but one axis"?
You could reshape the array so that all axes except the last are flattened (e.g. shape (k, l, m, n) becomes (k*l*m, n)), and then sum over the first axis.
For example, here's your calculation:
In [170]: arr.shape
Out[170]: (2, 3, 4)
In [171]: arr.sum(axis=tuple(range(arr.ndim - 1)))
Out[171]: array([2.85994792, 2.8922732 , 2.29051163, 2.77275709])
Here's the alternative:
In [172]: arr.reshape(-1, arr.shape[-1]).sum(axis=0)
Out[172]: array([2.85994792, 2.8922732 , 2.29051163, 2.77275709])
You can use np.apply_over_axes to sum over multiple axes.
np.apply_over_axes(np.sum, arr, [0,2]) #sum over axes 0 and 2
np.apply_over_axes(np.sum, arr, range(arr.ndim - 1)) #sum over all but last axis
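One difference worth noting: np.apply_over_axes keeps the reduced axes as length-1 dimensions, whereas arr.sum(axis=...) drops them. A small sketch on a (2, 3, 4) array:
import numpy as np

arr = np.random.rand(2, 3, 4)

# Tuple-of-axes sum drops the reduced dimensions
print(arr.sum(axis=(0, 1)).shape)                    # (4,)

# apply_over_axes keeps them as singleton dimensions
summed = np.apply_over_axes(np.sum, arr, [0, 1])
print(summed.shape)                                  # (1, 1, 4)

# Squeezing gives the same values as the tuple-of-axes sum
print(np.allclose(summed.squeeze(), arr.sum(axis=(0, 1))))  # True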

What does "reshape(-1, 1, 2)" mean?

import numpy as np

x = np.linspace(0, 10, 5)
y = 2 * x
points = np.array([x, y]).T.reshape(-1, 1, 2)
What does the third line mean? I know what reshape(m, n) means, but what does reshape(-1, 1, 2) mean?
Your question is not entirely clear, so I'm guessing the -1 part is what troubles you.
From the documentation:
The new shape should be compatible with the original shape. If an integer, then the result will be a 1-D array of that length. One shape dimension can be -1. In this case, the value is inferred from the length of the array and remaining dimensions.
The meaning of the whole line is this (breaking it down for simplicity):
points = np.array([x, y]) -> creates a 2 x 5 np.array consisting of x and y
.T -> transposes it to a 5 x 2 array
.reshape(-1, 1, 2) -> reshapes it, in this case to a 5 x 1 x 2 array (as can be seen from the output of points.shape, which is (5, 1, 2))
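A short sketch tracing the shapes through each step, using the x and y from the question:
import numpy as np

x = np.linspace(0, 10, 5)
y = 2 * x

stacked = np.array([x, y])            # shape (2, 5): row 0 is x, row 1 is y
transposed = stacked.T                # shape (5, 2): one (x, y) pair per row
points = transposed.reshape(-1, 1, 2)

print(stacked.shape)     # (2, 5)
print(transposed.shape)  # (5, 2)
print(points.shape)      # (5, 1, 2); the -1 is inferred as 5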
import numpy as np

vertices = np.array([[100, 300], [200, 200], [400, 300], [200, 400]], np.int32)
print(vertices.shape)  # (4, 2)
pts = vertices.reshape((-1, 1, 2))
print(pts.shape)       # (4, 1, 2)
Consider the above code: here we have created a set of vertices to be plotted on an image using OpenCV. OpenCV's drawing functions expect the points with an extra dimension, but we only have the vertices in a 2-D array. The .reshape((-1, 1, 2)) keeps the original data intact while adding a third, length-1 dimension (notice the extra brackets in the output); this N x 1 x 2 layout is simply the point-array format OpenCV expects.

Multiply a 1d array x 2d array python

I have a 2-D array and a 1-D array, and I need to multiply each element of the 2-D array by the element of the 1-D array corresponding to its column. It's basically like a matrix multiplication, but numpy won't allow matrix multiplication because of the 1-D array; matrices are inherently 2-D in numpy. How can I get around this problem? This is an example of what I want:
import numpy as np

FrMtx = np.zeros(shape=(24, 24))                # 2-D array
elem = np.zeros(24, dtype=float)                # 1-D array
Result = np.zeros(shape=(24, 24), dtype=float)  # 2-D array to store results

for i in range(24):
    for j in range(24):
        Result[i][j] = FrMtx[i][j] * elem[j]
Numerous efforts have given me errors such as arrays used as indices must be of integer or boolean type
Due to the NumPy broadcasting rules, a simple
Result = FrMtx * elem
will give the desired result.
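A quick sketch (using random values in place of the all-zero arrays from the question) verifying that the broadcast product matches the explicit double loop:
import numpy as np

FrMtx = np.random.rand(24, 24)
elem = np.random.rand(24)

# Broadcasting stretches `elem` across the rows, so column j is scaled by elem[j]
broadcast = FrMtx * elem

# Explicit double loop, as in the question
looped = np.zeros((24, 24))
for i in range(24):
    for j in range(24):
        looped[i, j] = FrMtx[i, j] * elem[j]

print(np.allclose(broadcast, looped))  # True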
You should be able to just multiply your arrays together, but it's not immediately obvious in which 'direction' the arrays will be multiplied, since the matrix is square. To be more explicit about which axes are being multiplied, I find it helpful to always multiply arrays that have the same number of dimensions.
For example, to multiply the columns:
mtx = np.zeros(shape=(5,7))
col = np.zeros(shape=(5,))
result = mtx * col.reshape((5, 1))
By reshaping col to (5,1), we guarantee that axis 0 of mtx is multiplied against axis 0 of col. To multiply rows:
mtx = np.zeros(shape=(5,7))
row = np.zeros(shape=(7,))
result = mtx * row.reshape((1, 7))
This guarantees that axis 1 in mtx is multiplied by axis 0 in row.
