How to construct the diagonal above the main diagonal using numpy and python?

I want to generate an array A defined as
A = [[1, 2, 0], [3, 1, 2], [3, 3, 1]]
I have tried to construct it as
A = np.diag(np.ones(3), k=0)
What should I do to fill the diagonals above and below the main diagonal? Passing k=0 only gives me the values on the main diagonal.

You can use np.fill_diagonal and some 2-D slicing to fill diagonals other than the main one: filling the main diagonal of a shifted view of A writes to an off-diagonal of A itself.
Try this:
import numpy as np

A = np.diag(np.ones(3), k=0)                # 1s on the main diagonal
np.fill_diagonal(A[1:, :], np.ones(3) * 3)  # view dropping row 0: fills the subdiagonal with 3s
np.fill_diagonal(A[:, 1:], np.ones(3) * 2)  # view dropping column 0: fills the superdiagonal with 2s
print(A)
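Alternatively, np.diag with a non-zero k builds the off-diagonals directly, so the matrix can be written as a sum of three diagonal arrays; a minimal sketch:
import numpy as np

# k=0 is the main diagonal, k=1 the diagonal above it, k=-1 the one below
A = (np.diag(np.ones(3), k=0)
     + np.diag(np.ones(2) * 2, k=1)
     + np.diag(np.ones(2) * 3, k=-1))
print(A)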

Related

pytorch batchwise indexing

I am searching for a way to do some batchwise indexing for tensors.
If I have a variable Q of size 1000, I can get the elements I want by
Q[index], where index is a vector of the indices of the wanted elements.
Now I would like to do the same for more dimensional tensors.
So suppose Q is of shape n x m and I have an index matrix of shape n x p.
My goal is to get, for each of the n rows, the specific p elements out of the m elements.
But Q[index] does not work in this situation.
Do you have any thoughts on how to handle this?
This seems to be a simple application of torch.gather, which doesn't require any additional reshaping of the data or index tensor:
>>> Q = torch.rand(5, 4)
>>> Q
tensor([[0.8462, 0.3064, 0.2549, 0.2149],
        [0.6801, 0.5483, 0.5522, 0.6852],
        [0.1587, 0.4144, 0.8843, 0.6108],
        [0.5265, 0.8269, 0.8417, 0.6623],
        [0.8549, 0.6437, 0.4282, 0.2792]])
>>> index
tensor([[0, 1, 2],
        [2, 3, 1],
        [0, 1, 2],
        [2, 2, 2],
        [1, 1, 2]])
The following gather operation applied on dim=1 returns a tensor out such that:
out[i, j] = Q[i, index[i,j]]
This is done with the following call of torch.Tensor.gather on Q:
>>> Q.gather(dim=1, index=index)
tensor([[0.8462, 0.3064, 0.2549],
        [0.5522, 0.6852, 0.5483],
        [0.1587, 0.4144, 0.8843],
        [0.8417, 0.8417, 0.8417],
        [0.6437, 0.6437, 0.4282]])
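As a sanity check, this minimal self-contained sketch verifies the out[i, j] = Q[i, index[i, j]] relation element by element:
import torch

Q = torch.rand(5, 4)
index = torch.tensor([[0, 1, 2], [2, 3, 1], [0, 1, 2], [2, 2, 2], [1, 1, 2]])
out = Q.gather(dim=1, index=index)

# every gathered entry equals the corresponding directly indexed entry
for i in range(index.shape[0]):
    for j in range(index.shape[1]):
        assert out[i, j] == Q[i, index[i, j]]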

How to concatenate an array into the first position of an n-d array in numpy?

I need to add an extra element at the first position of an n-d array in numpy.
Here is the code:
tmp_boxes3d = [None, 6]  # shape
tmp_scores = [None]      # shape
Now I need to add the tmp_scores element into the first position of tmp_boxes3d.
For better understanding:
boxes = np.array([[1,2,3,4],[5,6,7,8]])
scores = np.array([0,0])
boxes_scores = [[0,1,2,3,4],[0,5,6,7,8]] # result
So I need to do the same for the [None, 6] shape.
Can anyone help me with this?
Try np.concatenate:
boxes = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])
scores = np.array([0, 0])
np.concatenate((scores[:, np.newaxis], boxes), axis=1)
array([[0, 1, 2, 3, 4],
       [0, 5, 6, 7, 8]])
Or np.hstack:
np.hstack((scores[:, np.newaxis], boxes))
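np.insert can also place the new column directly at position 0; a minimal sketch:
import numpy as np

boxes = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])
scores = np.array([0, 0])

# insert scores as column 0 along axis 1
boxes_scores = np.insert(boxes, 0, scores, axis=1)
print(boxes_scores)
# [[0 1 2 3 4]
#  [0 5 6 7 8]]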

Finding an index in numpy (Python)

Consider a NumPy array of shape (8, 8).
My Question: What is the index (x,y) of the 50th element?
Note: For counting the elements go row-wise.
Example, in array A, where A = [[1, 5, 9], [3, 0, 2]] the 5th element would be '0'.
Can someone explain how to find the general solution for this and, what would be the solution for this specific problem?
You can use unravel_index to find the coordinates corresponding to the index of the flattened array. Note that NumPy arrays are 0-indexed, so you have to adjust for this.
import numpy as np
a = np.arange(64).reshape(8,8)
np.unravel_index(50-1, a.shape)
Out:
(6, 1)
In a NumPy array a of shape (r, c) (just like a list of lists), the n-th element is
a[(n-1) // c][(n-1) % c],
assuming that n starts from 1 as in your example.
It has nothing to do with r. Thus, when r = c = 8 and n = 50, the above formula is exactly
a[6][1].
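As a quick check, the formula agrees with np.unravel_index for the 50th element of an 8x8 array (a minimal sketch):
import numpy as np

r, c, n = 8, 8, 50
coords = ((n - 1) // c, (n - 1) % c)  # 0-based (row, column)
assert coords == np.unravel_index(n - 1, (r, c))
print(coords)  # (6, 1)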
Let me show more using your example:
import numpy as np

a = np.array([[1, 5, 9], [3, 0, 2]])
r = len(a)
c = len(a[0])
print(f'(r, c) = ({r}, {c})')
print(f'Shape: {a.shape}')
for n in range(1, r * c + 1):
    print(f'Element {n}: {a[(n-1) // c][(n-1) % c]}')
Below is the result:
(r, c) = (2, 3)
Shape: (2, 3)
Element 1: 1
Element 2: 5
Element 3: 9
Element 4: 3
Element 5: 0
Element 6: 2
numpy.ndarray.flatten(a) returns a copy of the array a collapsed into one dimension. Note that the counting starts from 0; therefore, in your example, 0 is the 4th element and 1 is the 0th.
import numpy as np
arr = np.array([[1, 5, 9], [3, 0, 2]])
fourth_element = np.ndarray.flatten(arr)[4]
or
fourth_element = arr.flatten()[4]
The same works for the 8x8 matrix.
First create an 8x8 2-D numpy array using np.array and range, then reshape the created array to 8x8.
In the output you can check that the index of the 50th element is [6, 1]:
import numpy as np
arr = np.array(range(1,(8*8)+1)).reshape(8,8)
print(arr[6,1])
The output will be 50.
Or you can do it in a generic way with the help of numpy's where method:
import numpy as np
def getElementIndex(array: np.ndarray, element):
    elementIndex = np.where(array == element)
    return f'[{elementIndex[0][0]},{elementIndex[1][0]}]'

def getXYOrderNumberArray(x: int, y: int):
    return np.array(range(1, (x * y) + 1)).reshape(x, y)

arr = getXYOrderNumberArray(8, 8)
print(getElementIndex(arr, 50))

Extract sub arrays based on kernel in numpy

I would like to know if there is an efficient method to get sub-arrays from a larger numpy array.
What I have is an application of np.where. I iterate 'manually' over x and y as offsets and apply where with a kernel to each rectangle extracted from the larger array with proper dimensions.
But is there a more direct approach in numpy's collection of methods?
import numpy as np
example = np.arange(20).reshape((5, 4))
# e.g. a cross kernel
a_kernel = np.asarray([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
np.where(a_kernel, example[1:4, 1:4], 0)
# returns
# array([[ 0,  6,  0],
#        [ 9, 10, 11],
#        [ 0, 14,  0]])
def arrays_from_kernel(a, a_kernel):
    width, height = a_kernel.shape
    y_max, x_max = a.shape
    return [np.where(a_kernel, a[y:(y + height), x:(x + width)], 0)
            for y in range(y_max - height + 1)
            for x in range(x_max - width + 1)]

sub_arrays = arrays_from_kernel(example, a_kernel)
This returns the arrays I need for further processing.
# [array([[0, 1, 0],
#         [4, 5, 6],
#         [0, 9, 0]]),
#  array([[ 0,  2,  0],
#         [ 5,  6,  7],
#         [ 0, 10,  0]]),
#  ...
#  array([[ 0,  9,  0],
#         [12, 13, 14],
#         [ 0, 17,  0]]),
#  array([[ 0, 10,  0],
#         [13, 14, 15],
#         [ 0, 18,  0]])]
The context: similar to 2D convolution I would like to apply a custom function on each of the subarrays (e.g. product of squared numbers).
At the moment, you're manually advancing a sliding window over the data - stride tricks to the rescue! (And no, I didn't just make that up - there's actually a submodule called stride_tricks in numpy!) Instead of manually building windows into the data, and calling np.where() on them, if you had the windows in an array, you could call np.where() just once. Stride tricks allow you to create such an array without even having to copy the data.
Let me explain. Normal slices in numpy create views into the original data instead of copies. This is done by referring to the original data but changing the strides used to access it (i.e. how far to jump between two elements or two rows, and so on). Stride tricks allow you to modify those strides more freely than slicing and reshaping do, so you can e.g. iterate over the same data more than once, which is useful here.
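To make the strides idea concrete, here is a quick look at the strides of an ordinary array (a minimal sketch):
import numpy as np

a = np.arange(12, dtype=np.int64).reshape(3, 4)
# 32 bytes to step to the next row (4 elements * 8 bytes each), 8 bytes to the next element
print(a.strides)  # (32, 8)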
Let me demonstrate:
import numpy as np
example = np.arange(20).reshape((5, 4))
a_kernel = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
def sliding_window(data, win_shape, **kwargs):
    assert data.ndim == len(win_shape)
    shape = tuple(dn - wn + 1 for dn, wn in zip(data.shape, win_shape)) + win_shape
    strides = data.strides * 2
    return np.lib.stride_tricks.as_strided(data, shape=shape, strides=strides, **kwargs)

def arrays_from_kernel(a, a_kernel):
    windows = sliding_window(a, a_kernel.shape)
    return np.where(a_kernel, windows, 0)

sub_arrays = arrays_from_kernel(example, a_kernel)
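On numpy 1.20 or newer, np.lib.stride_tricks.sliding_window_view builds the same window array without manual stride arithmetic and without the hazards of as_strided; a minimal sketch:
import numpy as np

example = np.arange(20).reshape((5, 4))
a_kernel = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])

# one 3x3 window per valid (y, x) offset, as a read-only view
windows = np.lib.stride_tricks.sliding_window_view(example, a_kernel.shape)
sub_arrays = np.where(a_kernel, windows, 0)
print(sub_arrays.shape)  # (3, 2, 3, 3)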
The scipy.ndimage module offers a number of filters, one of which might meet your needs. If none of those filters does what you want, you can use ndimage.generic_filter to call a custom function on each subarray. ndimage.generic_filter is not as fast as the other ndimage filters, however.
For example,
import numpy as np
example = np.arange(20).reshape((5, 4))
a_kernel = np.asarray([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
# def arrays_from_kernel(a, a_kernel):
#     width, height = a_kernel.shape
#     y_max, x_max = a.shape
#     return [np.where(a_kernel, a[y:(y + height), x:(x + width)], 0)
#             for y in range(y_max - height + 1)
#             for x in range(x_max - width + 1)]
# sub_arrays = arrays_from_kernel(example, a_kernel)
# for arr in sub_arrays:
#     print(arr)
#     print('-'*80)
import scipy.ndimage as ndimage
def func(x):
    # reject subarrays that extend beyond the border of the `example` array
    if not np.isnan(x).any():
        y = np.zeros_like(a_kernel, dtype=example.dtype)
        np.put(y, np.flatnonzero(a_kernel), x)
        print(y)
    # Instead of returning 0, you can perform your desired computation on the subarray here.
    # Note that you may not need the 2D array y; often, you only need the values in the 1D array x.
    return 0

result = ndimage.generic_filter(example, func, footprint=a_kernel, mode='constant', cval=np.nan)
For the particular problem of computing the product of squares for each subarray, you could convert the product into a sum by taking advantage of the fact that A * B = exp(log(A) + log(B)). This allows you to express the computation as a normal convolution, and using ndimage.convolve can improve performance a lot. The amount of improvement depends on the size of example:
import numpy as np
import scipy.ndimage as ndimage
import perfplot
a_kernel = np.asarray([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
def orig(example, a_kernel=a_kernel):
    def arrays_from_kernel(a, a_kernel):
        width, height = a_kernel.shape
        y_max, x_max = a.shape
        return [
            np.where(a_kernel, a[y : (y + height), x : (x + width)], 1)
            for y in range(y_max - height + 1)
            for x in range(x_max - width + 1)
        ]
    return [np.prod(x) ** 2 for x in arrays_from_kernel(example, a_kernel)]

def alt(example, a_kernel=a_kernel):
    logged = np.log(example)
    result = ndimage.convolve(logged, a_kernel, mode="constant", cval=0)[1:-1, 1:-1]
    return (np.exp(result) ** 2).ravel()

def make_example(N):
    return np.random.random(size=(N, N))

def check(A, B):
    return np.allclose(A, B)

perfplot.show(
    setup=make_example,
    kernels=[orig, alt],
    n_range=[2 ** k for k in range(2, 11)],
    logx=True,
    logy=True,
    xlabel="len(example)",
    equality_check=check,
)

How do I use numpy vectorize to iterate through a two-dimensional vector?

I am trying to use numpy.vectorize to iterate over a (2x5) matrix containing two vectors that hold the x- and y-values of coordinates. Each coordinate (an x- and a y-value) is to be fed to a function that returns a (1x1) vector per iteration, so the end result should be a (1x5) vector. My problem is that instead of iterating over each element separately, I want the algorithm to iterate over both vectors simultaneously, picking up the x- and y-value of each coordinate in parallel to feed them to the function.
data = np.transpose(np.array([[1, 2], [1, 3], [2, 1], [1, -1], [2, -1]]))
th_ = np.array([[1, 1]])
th0_ = -2
def positive(x, th=th_, th0=th0_):
    if signed_dist(x, th, th0)[0][0] > 0:
        return np.array([[1]])
    elif signed_dist(x, th, th0)[0][0] == 0:
        return np.array([[0]])
    else:
        return np.array([[-1]])
positive_numpy = np.vectorize(positive)
results = positive_numpy(data)
Reading the numpy documentation did not really help, and I want to avoid large workarounds for the sake of computation time. Thanks for any suggestions!
This is a bit of a guess, but it looks like your code can be simplified to:
data = np.array([[1, 2], [1, 3], [2, 1], [1, -1], [2, -1]]) # (5,2) array
th_ = np.array([[1, 1]])
th0_ = -2
alist = [signed_dist(x, th_, th0_) for x in data]
arr = np.array(alist) # (5,?,?) array
arr = arr[:,0,0] # (5,) array
arr[arr>0] = 1
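Since positive effectively computes the sign of signed_dist, the whole loop may collapse into one vectorized expression. Assuming signed_dist is the usual affine distance th . x + th0 (an assumption, since its definition is not shown), a minimal sketch:
import numpy as np

data = np.array([[1, 2], [1, 3], [2, 1], [1, -1], [2, -1]])  # (5, 2)
th_ = np.array([[1, 1]])
th0_ = -2

# assumed definition: signed_dist(x, th, th0) = th . x + th0
dists = data @ th_.T + th0_       # (5, 1) signed distances
results = np.sign(dists).ravel()  # (5,) entries in {-1, 0, 1}
print(results)  # [ 1  1  1 -1 -1]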
