Less verbose way to flatten & join two arrays in NumPy?

I am trying to do the following.
I have two 2D arrays, X and Y. Each is 100x100 elements. I want to linearize/flatten them into 10,000x1 columns and then concatenate them so I am left with a single matrix that is 10,000x2. In MATLAB I could do the following
BigMatrix = [X(:) Y(:)]
I want do the same thing in Python. After playing around with it for a bit I've been able to replicate the MATLAB result, albeit in quite a verbose manner, shown below. Is there a better, more succinct way to accomplish this?
BigMatrix = np.concatenate(
    (X.reshape((10000, 1), order='F'),
     Y.reshape((10000, 1), order='F')),
    axis=1)

There are multiple ways to achieve what you want, and what you have is perfectly valid. However, here are some other methods that you might find more "succinct."
Using np.ndarray.flatten
Return a copy of the array collapsed into one dimension.
You can also specify whether to flatten in column-major or row-major order.
In order to get the result you want (an m x 2 matrix, with each flattened array as a column) you can then use numpy.column_stack:
BigMatrix = np.column_stack([X.flatten(order = 'F'), Y.flatten(order = 'F')])
Or if you are looking for really succinct code, as @ssp mentioned, you can use the NumPy indexing routines (which basically give special behavior to slices). There are two for concatenation, one for each axis: r_ is for row-wise (the first axis) and c_ is for column-wise (the second axis), so to get your m x 2 matrix you can do:
BigMatrix = np.c_[X.flatten(order = 'F'), Y.flatten(order = 'F')]
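For intuition, here is a minimal sketch (the array names are just illustrative) of how r_ and c_ differ on two 1-D arrays:
import numpy as np

a = np.ones(3)
b = np.zeros(3)

print(np.r_[a, b].shape)  # (6,)   -- joined end to end along the first axis
print(np.c_[a, b].shape)  # (3, 2) -- stacked side by side as columns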
Performance?
As far as performance goes, you might be better off with your original code, as @hpaulj suggests. Here is a simple timing of the three methods, where each method is run 1 million times on matrices of your 100x100 size.
from timeit import timeit

setup = ("import numpy as np\n"
         "X = np.random.standard_normal((100, 100))\n"
         "Y = np.random.standard_normal((100, 100))")

print("c_ w/ flatten", timeit(
    setup=setup,
    stmt="Z = np.c_[X.flatten(order='F'), Y.flatten(order='F')]"
))
print("column_stack w/ flatten", timeit(
    setup=setup,
    stmt="Z = np.column_stack((X.flatten(order='F'), Y.flatten(order='F')))"
))
print("concatenate w/ reshape", timeit(
    setup=setup,
    stmt="Z = np.concatenate((X.reshape((10000,1), order='F'), Y.reshape((10000,1), order='F')), axis=1)"
))
and we get
c_ w/ flatten 44.47710300699691
column_stack w/ flatten 29.201319813000737
concatenate w/ reshape 27.67507728200144
Surprisingly, column_stack with flatten is comparable to concatenate with reshape, while the c_ index routine is significantly slower.
(If there is anything I missed with this performance analysis, let me know. I am not a performance guru).

With a small 2d array:
In [404]: x = np.arange(4).reshape(2,2)
reshape with order='F' is the most direct equivalent of the MATLAB (:) indexing, producing an (n,1) array. (Is x(:).' the MATLAB syntax for a (1,n) matrix?)
In [405]: x1 = x.reshape((4,1),order='F')
In [406]: x
Out[406]:
array([[0, 1],
       [2, 3]])
In [407]: x1
Out[407]:
array([[0],
       [2],
       [1],
       [3]])
Joining two such 'column vectors' is easy:
In [408]: np.concatenate((x1,x1), axis=1)
Out[408]:
array([[0, 0],
       [2, 2],
       [1, 1],
       [3, 3]])
np.stack is a version of concatenate that creates a new dimension and joins on that. With axis=0 it's the same as np.array((x,x)) (a quick check of that equivalence follows the next example).
In [409]: np.stack((x,x), axis=2)
Out[409]:
array([[[0, 0],
        [1, 1]],

       [[2, 2],
        [3, 3]]])
An order='F' reshape creates the 2-column array as before:
In [411]: np.stack((x,x), axis=2).reshape((-1,2),order='F')
Out[411]:
array([[0, 0],
       [2, 2],
       [1, 1],
       [3, 3]])
or using the default order:
In [412]: np.stack((x,x), axis=2).reshape((-1,2))
Out[412]:
array([[0, 0],
       [1, 1],
       [2, 2],
       [3, 3]])
numpy is a Python package, working through functions, indexing, and methods. It doesn't alter or add to the basic Python syntax.

Related

How to slice a numpy array using index arrays with different shapes?

Let's say that we have the following 2d numpy array:
arr = np.array([[1,1,0,1,1],
                [0,0,0,1,0],
                [1,0,0,0,0],
                [0,0,1,0,0],
                [0,1,0,0,0]])
and the following indices for rows and columns:
rows = np.array([0,2,4])
cols = np.array([1,2])
The objective is to slice arr using rows and cols to take the following expected result:
arr_sliced = np.array([[1,0],
                       [0,0],
                       [1,0]])
Using directly the arrays as indices like arr[rows, cols] leads to:
IndexError: shape mismatch: indexing arrays could not be broadcast together with shapes (3,) (2,)
So what is the straightforward way to achieve this kind of slicing?
Update: useful information about the solution
The solution was simple enough, but it demands a basic comprehension of NumPy's broadcasting. NumPy's docs offer some nice, if not entirely representative, examples; the general broadcasting rules explain why there is no shape mismatch in:
arr[rows[:, np.newaxis], cols]
# rows[:, np.newaxis].shape == (3,1)
# cols.shape == (2,)
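To see the broadcast concretely: the (3,1) row index and the (2,) column index broadcast to a (3,2) result. A minimal check (np.broadcast_shapes needs NumPy 1.20+):
import numpy as np

rows = np.array([0, 2, 4])
cols = np.array([1, 2])
# the index arrays' shapes broadcast just like arithmetic operands would
print(np.broadcast_shapes(rows[:, np.newaxis].shape, cols.shape))  # (3, 2)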
You can use:
arr[rows[:,None], cols[None]]
Output:
array([[1, 0],
       [0, 0],
       [1, 0]])
It also looks like this is much quicker than plain indexing for large arrays.
arr[np.ix_([0,2,4],[1,2])]
array([[1, 0],
       [0, 0],
       [1, 0]])
documentation: https://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.ix_.html
This function takes N 1-D sequences and returns N outputs with N dimensions each, such that the shape is 1 in all but one dimension and the dimension with the non-unit shape value cycles through all N dimensions.
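In other words, np.ix_ builds index arrays already shaped to broadcast against each other; a quick illustration:
import numpy as np

ixgrid = np.ix_([0, 2, 4], [1, 2])
# shapes (3, 1) and (1, 2) broadcast to the (3, 2) sliced result
print(ixgrid[0].shape, ixgrid[1].shape)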

Is there any way to vectorize a rolling cross-correlation in python based on my example?

Let's suppose I have two arrays that represent pixels in pictures.
I want to build an array of tensordot products of pixels of a smaller picture with a bigger picture as it "scans" the latter. By "scanning" I mean iteration over rows and columns while creating overlays with the original picture.
For instance, a 2x2 picture can be overlaid on top of 3x3 in four different ways, so I want to produce a four-element array that contains tensordot products of matching pixels.
Tensordot is calculated by multiplying a[i,j] with b[i,j] element-wise and summing the terms.
Please examine this code:
import numpy as np

a = np.array([[0,1,2],
              [3,4,5],
              [6,7,8]])
b = np.array([[0,1],
              [2,3]])

shape_diff = (a.shape[0] - b.shape[0] + 1,
              a.shape[1] - b.shape[1] + 1)

def compute_pixel(x, y):
    sub_matrix = a[x : x + b.shape[0],
                   y : y + b.shape[1]]
    return np.tensordot(sub_matrix, b, axes=2)

def process():
    arr = np.zeros(shape_diff)
    for i in range(shape_diff[0]):
        for j in range(shape_diff[1]):
            arr[i,j] = compute_pixel(i,j)
    return arr

print(process())
Computing a single pixel is very easy: all I need is the starting location coordinates within a. From there I take a sub-matrix matching the size of b and compute the tensordot product.
However, because I need to do this all over again for each x and y location as I iterate over rows and columns, I've had to use a loop, which is of course suboptimal.
In the next piece of code I have tried to utilize a handy feature of tensordot, which also accepts higher-dimensional tensors as arguments. In other words, I can feed it an array of sub-arrays of a for the different positions, while keeping b the same.
Although in order to create such an array of combinations, I couldn't think of anything better than using another loop, which sounds rather silly in this case.
def try_vector():
    tensor = np.zeros(shape_diff + b.shape)
    for i in range(shape_diff[0]):
        for j in range(shape_diff[1]):
            tensor[i,j] = a[i : i + b.shape[0],
                            j : j + b.shape[1]]
    return np.tensordot(tensor, b, axes=2)

print(try_vector())
Note: the tensor's shape is the sum (i.e. concatenation) of the two shape tuples, which in this case gives (2, 2, 2, 2).
Yet even if I produced such an array, it would be prohibitively large to be of any practical use: doing this for a 1000x1000 picture could consume all the available memory.
So, are there any other ways to avoid loops in this problem?
In [111]: process()
Out[111]:
array([[19., 25.],
       [37., 43.]])
tensordot with axes=2 is the same as an element-wise multiply and sum:
In [116]: np.tensordot(a[0:2,0:2],b, axes=2)
Out[116]: array(19)
In [126]: (a[0:2,0:2]*b).sum()
Out[126]: 19
A lower-memory way of generating your tensor is:
In [121]: np.lib.stride_tricks.sliding_window_view(a,(2,2))
Out[121]:
array([[[[0, 1],
         [3, 4]],

        [[1, 2],
         [4, 5]]],


       [[[3, 4],
         [6, 7]],

        [[4, 5],
         [7, 8]]]])
We can do a broadcasted multiply, and sum on the last 2 axes:
In [129]: (Out[121]*b).sum((2,3))
Out[129]:
array([[19, 25],
       [37, 43]])
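Putting the pieces together, here is a minimal sketch of the loop-free version (sliding_window_view needs NumPy 1.20+):
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

a = np.arange(9).reshape(3, 3)
b = np.array([[0, 1],
              [2, 3]])

windows = sliding_window_view(a, b.shape)  # a view, so no big copy is made
result = (windows * b).sum(axis=(2, 3))    # broadcasted multiply, then sum
print(result)  # [[19 25]
               #  [37 43]]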

How does slicing numpy arrays with other arrays work?

I have a numpy array of shape [batch_size, timesteps_per_samples, width, height], where width and height refer to a 2D grid. The values in this array can be interpreted as an elevation at a certain location that changes over time.
I want to know the elevation over time for various paths within this array. Therefore I have a second array of shape [batch_size, paths_per_batch_sample, timesteps_per_path, coordinates] (coordinates = 2, for x and y in the 2D plane).
The resulting array should be of shape [batch_size, paths_per_batch_sample, timesteps_per_path] containing the elevation over time for each sample within the batch.
The following two examples work. The first one is very slow and just serves for understanding what I am trying to do. I think the second one does what I want, but I have no idea why it works nor whether it may break under certain circumstances.
Code for the problem setup:
import numpy as np

batch_size = 32
paths_per_batch_sample = 10
timesteps_per_path = 4
width = 64
height = 64

elevation = np.arange(0, batch_size*timesteps_per_path*width*height, 1)
elevation = elevation.reshape(batch_size, timesteps_per_path, width, height)
paths = np.random.randint(0, high=width-1,
                          size=(batch_size, paths_per_batch_sample,
                                timesteps_per_path, 2))

range_batch = range(batch_size)
range_paths = range(paths_per_batch_sample)
range_timesteps = range(timesteps_per_path)
The following code works but is very slow:
elevation_per_time = np.zeros((batch_size, paths_per_batch_sample, timesteps_per_path))
for s in range_batch:
    for k in range_paths:
        for t in range_timesteps:
            x_co, y_co = paths[s,k,t,:].astype(int)
            elevation_per_time[s,k,t] = elevation[s,t,x_co,y_co]
The following code also works (and is even fast), but I can't understand why or how:
elevation_per_time_fast = elevation[
    :,
    range_timesteps,
    paths[:, :, range_timesteps, 0].astype(int),
    paths[:, :, range_timesteps, 1].astype(int),
][range_batch, range_batch, :, :]
Proof that the results are equal:
check = (elevation_per_time == elevation_per_time_fast)
print(np.all(check))
Can somebody explain how I can slice an nd-array by multiple other arrays?
In particular, I don't understand how numpy knows that range_timesteps has to run in step (as the index for axes 1, 2 and 3).
Thanks in advance!
Let's take a quick look at slicing NumPy arrays first:
a = np.arange(0,9,1).reshape([3,3])
array([[0, 1, 2],
       [3, 4, 5],
       [6, 7, 8]])
NumPy has two ways of slicing arrays: full sections with start:stop, and by index from a list [index1, index2, ...]. The output will still be an array with the shape of your slice:
a[0:2,:]
array([[0, 1, 2],
       [3, 4, 5]])
a[:,[0,2]]
array([[0, 2],
       [3, 5],
       [6, 8]])
The second part is that, since each slice returns an array with the same number of dimensions, you can easily chain any number of slices, as long as you don't try to directly access an index outside of the array:
a[:][:][:][:][:][:][:][[0,2]][:,[0,2]]
array([[0, 2],
       [6, 8]])
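For the question's mixed case, the key rule is that all the integer index arrays broadcast together before picking elements; here is a minimal sketch of paired vs. broadcast indexing (small illustrative arrays):
import numpy as np

a = np.arange(9).reshape(3, 3)
rows = np.array([0, 2])
cols = np.array([1, 2])

print(a[rows, cols])           # paired element-wise: a[0,1], a[2,2] -> [1 8]
print(a[rows[:, None], cols])  # (2,1) broadcast with (2,): a 2x2 grid of combinations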

Add lists of numpy arrays element-wise

I've been working on an algorithm for backpropagation in neural networks. My program calculates the partial derivative of each weight with respect to the loss function and stores it in an array. The weights at each layer are stored in a single 2D numpy array, so the partial derivatives are stored as a list of numpy arrays, where each numpy array has a different size depending on the number of neurons in each layer.
When I want to average the partial derivatives after a number of training examples have been used, I want to add the corresponding arrays together and divide by the number of arrays. Currently, I just iterate through the arrays and add them element by element, but is there a quicker way? I could use an ndarray with dtype=object, but apparently this has been deprecated.
For example, if I have the arrays:
arr1 = [ndarray([[1,1],[1,1],[1,1]]), ndarray([[2,2],[2,2]])]
arr2 = [ndarray([[3,3],[3,3],[3,3]]), ndarray([[4,4],[4,4]])]
How can I add these together to get the array:
arr3 = [ndarray([[4,4],[4,4],[4,4]]), ndarray([[6,6],[6,6]])]
You don't need to add the numbers in the arrays element by element yourself; make use of NumPy's vectorized computation by using numpy.add.
Here's some code to do just that:
import numpy as np

arr1 = [np.array([[1,1],[1,1],[1,1]]), np.array([[2,2],[2,2]])]
arr2 = [np.array([[3,3],[3,3],[3,3]]), np.array([[4,4],[4,4]])]

ans = []
for first, second in zip(arr1, arr2):
    ans.append(np.add(first, second))
Outputs:
>>> [array([[4, 4], [4, 4], [4, 4]]), array([[6, 6], [6, 6]])]
P.S. You could use a one-liner list comprehension as well:
ans = [np.add(first, second) for first, second in zip(arr1, arr2)]
You can use zip/map/sum:
import numpy as np
arr1 = [np.array([[1,1],[1,1],[1,1]]), np.array([[2,2],[2,2]])]
arr2 = [np.array([[3,3],[3,3],[3,3]]), np.array([[4,4],[4,4]])]
arr3 = list(map(sum, zip(arr1, arr2)))
output:
>>> arr3
[array([[4, 4],
        [4, 4],
        [4, 4]]),
 array([[6, 6],
        [6, 6]])]
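This works because the built-in sum starts from 0 and repeatedly applies +, which NumPy evaluates element-wise; a quick check:
import numpy as np

x = np.array([[1, 1], [1, 1]])
y = np.array([[3, 3], [3, 3]])
print(sum((x, y)))  # same as 0 + x + y, element-wise -> [[4 4] [4 4]]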
In NumPy, you can add two arrays element-wise simply with the + operator.
N.B.: if your array shapes vary, either pad the arrays with 0 to a common shape, or build object arrays (dtype=object) so that + is applied pairwise:
arr1 = np.array([np.array([[1,1],[1,1],[1,1]]), np.array([[2,2],[2,2]])], dtype=object)
arr2 = np.array([np.array([[3,3],[3,3],[3,3]]), np.array([[4,4],[4,4]])], dtype=object)
arr3 = arr2 + arr1
You can use a list comprehension:
[x + y for x, y in zip(arr1, arr2)]

What are the efficient ways to assign values to 2D numpy arrays as functions of indices

It may be a stupid question, but I couldn't find a similar question asked (so far).
For example, I define a function called f(x, y):
def f(x, y):
    return x + y
Now I want to output a 2D numpy array, the value of an element is equal to its indices summed, for example, if I want a 2x2 array:
arr = [[0, 1],
       [1, 2]]
If I want a 3x3 array, then the output should be:
arr = [[0, 1, 2],
       [1, 2, 3],
       [2, 3, 4]]
It's not efficient to assign the values one by one, especially if the array is large, say 10000x10000; that also wastes NumPy's speed. Although it sounds quite basic, I can't think of a simple and quick solution. What is the most common and efficient way to do it?
By the way, summing the indices is just an example. I hope that the method can also be generalized to arbitrary functions like, say,
def f(x, y):
    return np.cos(x) + np.sin(y)
Or even to higher-dimensional arrays, like 4D arrays.
You can use numpy.indices, which returns an array representing the indices of a grid; you'll just need to sum along the 0 axis:
>>> a = np.random.random((2,2))
>>> np.indices(a.shape).sum(axis=0)
array([[0, 1],
       [1, 2]])
>>> a = np.random.random((3,3))
>>> np.indices((3,3)).sum(axis=0)
array([[0, 1, 2],
       [1, 2, 3],
       [2, 3, 4]])
