Get average of neighbours in matrix with na values - python

I have a very large matrix, so I don't want to sum by going through each row and column.
a = [[1,2,3],[3,4,5],[5,6,7]]

def neighbors(i, j, a):
    return [a[i][j-1], a[i][(j+1)%len(a[0])], a[i-1][j], a[(i+1)%len(a)][j]]

[[np.mean(neighbors(i, j, a)) for j in range(len(a[0]))] for i in range(len(a))]
This code works well for a 3x3 or similarly small matrix, but for a large matrix like 2k x 2k this is not feasible. It also does not work if any value in the matrix is missing (na). If any of the neighbour values is na, that neighbour should be skipped when computing the average.

Shot #1
This assumes you are looking to get sliding windowed average values in an input array with a window of 3 x 3, considering only the north, west, east and south neighborhood elements.
For such a case, signal.convolve2d with an appropriate kernel could be used. At the end, you need to divide those summations by the number of ones in the kernel, i.e. kernel.sum(), as only those contributed to the summations. Here's the implementation -
import numpy as np
from scipy import signal
# Inputs
a = [[1,2,3],[3,4,5],[5,6,7],[4,8,9]]
# Convert to numpy array
arr = np.asarray(a,float)
# Define kernel for convolution
kernel = np.array([[0,1,0],
                   [1,0,1],
                   [0,1,0]])
# Perform 2D convolution with input data and kernel
out = signal.convolve2d(arr, kernel, boundary='wrap', mode='same')/kernel.sum()
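The na part of the question can be handled with the same machinery. Here's a sketch, assuming missing values are stored as np.nan: convolve a zero-filled copy of the array and a validity mask separately, then divide, so each average counts only the non-missing neighbours.
import numpy as np
from scipy import signal

kernel = np.array([[0,1,0],
                   [1,0,1],
                   [0,1,0]])

arr = np.asarray([[1, 2, 3], [3, np.nan, 5], [5, 6, 7]], float)

valid = ~np.isnan(arr)                 # True where a value is present
filled = np.where(valid, arr, 0.0)     # NaNs contribute 0 to the sums

sums = signal.convolve2d(filled, kernel, boundary='wrap', mode='same')
counts = signal.convolve2d(valid.astype(float), kernel, boundary='wrap', mode='same')

out = sums / counts   # average over non-NaN neighbours only
                      # (counts can be 0 if all four neighbours are NaN)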
Shot #2
This makes the same assumptions as in shot #1, except that we are looking to find average values in a neighborhood of only zero elements with the intention to replace them with those average values.
Approach #1: Here's one way to do it using a manual selective convolution approach -
import numpy as np
# Convert to numpy array
arr = np.asarray(a,float)
# Pad around the input array to take care of boundary conditions
arr_pad = np.pad(arr, (1,1), 'wrap')
R,C = np.where(arr==0) # Row, column indices for zero elements in input array
N = arr_pad.shape[1] # Number of columns in padded array (linear-index offset of one row)
offset = np.array([-N, -1, 1, N])
idx = np.ravel_multi_index((R+1,C+1),arr_pad.shape)[:,None] + offset
arr_out = arr.copy()
arr_out[R,C] = arr_pad.ravel()[idx].sum(1)/4
Sample input, output -
In [587]: arr
Out[587]:
array([[ 4.,  0.,  3.,  3.,  3.,  1.,  3.],
       [ 2.,  4.,  0.,  0.,  4.,  2.,  1.],
       [ 0.,  1.,  1.,  0.,  1.,  4.,  3.],
       [ 0.,  3.,  0.,  2.,  3.,  0.,  1.]])
In [588]: arr_out
Out[588]:
array([[ 4.  ,  3.5 ,  3.  ,  3.  ,  3.  ,  1.  ,  3.  ],
       [ 2.  ,  4.  ,  2.  ,  1.75,  4.  ,  2.  ,  1.  ],
       [ 1.5 ,  1.  ,  1.  ,  1.  ,  1.  ,  4.  ,  3.  ],
       [ 2.  ,  3.  ,  2.25,  2.  ,  3.  ,  2.25,  1.  ]])
To take care of the boundary conditions, there are other options for padding. Look at numpy.pad for more info.
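For instance, a quick sketch contrasting two common modes:
import numpy as np

arr = np.arange(9.).reshape(3, 3)
print(np.pad(arr, 1, mode='wrap'))   # periodic boundaries, as used above
print(np.pad(arr, 1, mode='edge'))   # replicate the border values instead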
Approach #2: This is a modified version of the convolution based approach listed earlier in Shot #1. It is the same as that earlier approach, except that at the end, we selectively replace
the zero elements with the convolution output. Here's the code -
import numpy as np
from scipy import signal
# Inputs
a = [[1,2,3],[3,4,5],[5,6,7],[4,8,9]]
# Convert to numpy array
arr = np.asarray(a,float)
# Define kernel for convolution
kernel = np.array([[0,1,0],
                   [1,0,1],
                   [0,1,0]])
# Perform 2D convolution with input data and kernel
conv_out = signal.convolve2d(arr, kernel, boundary='wrap', mode='same')/kernel.sum()
# Initialize output array as a copy of input array
arr_out = arr.copy()
# Setup a mask of zero elements in input array and
# replace those in output array with the convolution output
mask = arr==0
arr_out[mask] = conv_out[mask]
Remarks: Approach #1 would be the preferred way when you have a smaller number of zero elements in the input array; otherwise go with Approach #2.

This is an appendix to comments under @Divakar's answer (rather than an independent answer).
Out of curiosity I tried different 'pseudo' convolutions against the scipy convolution. The fastest one was the % (modulus) wrapping one, which surprised me: numpy evidently does something clever with its indexing, though of course not having to pad saves time.
fn3 -> 9.5ms, fn1 -> 21ms, fn2 -> 232ms
import timeit
setup = """
import numpy as np
from scipy import signal
N = 1000
M = 750
P = 5 # i.e. small number -> bigger proportion of zeros
a = np.random.randint(0, P, M * N).reshape(M, N)
arr = np.asarray(a,float)"""
fn1 = """
arr_pad = np.pad(arr, (1,1), 'wrap')
R,C = np.where(arr==0)
N = arr_pad.shape[1]
offset = np.array([-N, -1, 1, N])
idx = np.ravel_multi_index((R+1,C+1),arr_pad.shape)[:,None] + offset
arr[R,C] = arr_pad.ravel()[idx].sum(1)/4"""
fn2 = """
kernel = np.array([[0,1,0],
                   [1,0,1],
                   [0,1,0]])
conv_out = signal.convolve2d(arr, kernel, boundary='wrap', mode='same')/kernel.sum()
mask = arr == 0.0
arr[mask] = conv_out[mask]"""
fn3 = """
R,C = np.where(arr == 0.0)
arr[R, C] = (arr[(R-1)%M,C] + arr[R,(C-1)%N] + arr[R,(C+1)%N] + arr[(R+1)%M,C]) / 4.0
"""
print(timeit.timeit(fn1, setup, number = 100))
print(timeit.timeit(fn2, setup, number = 100))
print(timeit.timeit(fn3, setup, number = 100))

Using numpy and scipy.ndimage, you can apply a "footprint" that defines where you look for the neighbours of each element and apply a function to those neighbours:
import numpy as np
import scipy.ndimage as ndimage
# Getting neighbours horizontally and vertically,
# not diagonally
footprint = np.array([[0,1,0],
                      [1,0,1],
                      [0,1,0]])
a = [[1,2,3],[3,4,5],[5,6,7]]
# Need to make sure that dtype is float or the
# mean won't be calculated correctly
a_array = np.array(a, dtype=float)
# Can specify that you want neighbour selection to
# wrap around at the borders
ndimage.generic_filter(a_array, np.mean,
                       footprint=footprint, mode='wrap')
Out[36]:
array([[ 3.25,  3.5 ,  3.75],
       [ 3.75,  4.  ,  4.25],
       [ 4.25,  4.5 ,  4.75]])
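To also skip na neighbours, as the question asks, np.nanmean can be dropped in where np.mean was used. A sketch, assuming missing values are stored as np.nan:
import numpy as np
import scipy.ndimage as ndimage

footprint = np.array([[0,1,0],
                      [1,0,1],
                      [0,1,0]])

a_array = np.array([[1, 2, 3], [3, np.nan, 5], [5, 6, 7]], dtype=float)

# np.nanmean ignores NaN values inside each footprint window,
# so missing neighbours are simply skipped
ndimage.generic_filter(a_array, np.nanmean, footprint=footprint, mode='wrap')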

Related

Scipy function that can do np.diff() with compressed sparse column matrix

I want to compute discrete difference of identity matrix.
The code below uses numpy and scipy.
import numpy as np
from scipy.sparse import identity
from scipy.sparse import csc_matrix
x = identity(4).toarray()
y = csc_matrix(np.diff(x, n=2))
print(y)
I would like to improve performance or memory usage.
Since the identity matrix produces many zeros, performing the calculation in compressed sparse column (csc) format would reduce memory usage. However, np.diff() does not accept the csc format, so converting between csc and dense formats using csc_matrix slows things down a bit.
Normal format
x = identity(4).toarray()
print(x)
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
csc format
x = identity(4)
print(x)
(0, 0) 1.0
(1, 1) 1.0
(2, 2) 1.0
(3, 3) 1.0
Thanks
Here is my hacky solution to get the sparse matrix as you want.
L - the length of the original identity matrix,
n - the parameter of np.diff.
In your question they are:
L = 4
n = 2
My code produces the same y as your code, but without the conversions between csc and normal formats.
Your code:
from scipy.sparse import identity, csc_matrix
x = identity(L).toarray()
y = csc_matrix(np.diff(x, n=n))
My code:
import numpy as np
from scipy.linalg import pascal

def get_data(n, L):
    nums = pascal(n + 1, kind='lower')[-1].astype(float)
    minuses_from = n % 2 + 1
    nums[minuses_from::2] *= -1
    return np.tile(nums, L - n)

data = get_data(n, L)
row_ind = (np.arange(n + 1) + np.arange(L - n).reshape(-1, 1)).flatten()
col_ind = np.repeat(np.arange(L - n), n + 1)
y = csc_matrix((data, (row_ind, col_ind)), shape=(L, L - n))
I noticed that after applying np.diff of order n to the identity matrix, the values in each column are the binomial coefficients with alternating signs. This is my variable data.
Then I am just constructing the csc_matrix.
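For reference, a quick self-contained check (restating get_data from above, with the question's L = 4, n = 2) that this construction matches np.diff:
import numpy as np
from scipy.linalg import pascal
from scipy.sparse import identity, csc_matrix

L, n = 4, 2

def get_data(n, L):
    nums = pascal(n + 1, kind='lower')[-1].astype(float)
    nums[n % 2 + 1::2] *= -1          # alternate the signs
    return np.tile(nums, L - n)

data = get_data(n, L)
row_ind = (np.arange(n + 1) + np.arange(L - n).reshape(-1, 1)).flatten()
col_ind = np.repeat(np.arange(L - n), n + 1)
y = csc_matrix((data, (row_ind, col_ind)), shape=(L, L - n))

y_ref = np.diff(identity(L).toarray(), n=n)
print(np.array_equal(y.toarray(), y_ref))   # True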
Unfortunately, it does not seem that SciPy provides any tools for this kind of sparse matrix manipulation. Regardless, by cleverly manipulating the indices and data of the entries one can emulate np.diff(x,n) in a straightforward fashion.
Given a 2D NumPy array (matrix) of dimension MxN, np.diff() multiplies each column (of column index y) by -1 and adds the next column (column index y+1) to it. A difference of order k is just the iterative application of k differences of order 1. A difference of order 0 just returns the input matrix.
The method below makes use of this, iteratively eliminating duplicate entries by addition through sum_duplicates(), reducing the number of columns by one, and filtering out invalid indices.
def csc_diff(x, n):
    '''Emulates np.diff(x,n) for a sparse matrix by iteratively taking differences of order 1.'''
    assert isinstance(x, csc_matrix) or (isinstance(x, np.ndarray) and len(x.shape) == 2), "Input matrix must be a 2D np.ndarray or csc_matrix."
    assert isinstance(n, int) and n >= 0, "Integer n must be larger or equal to 0."
    if n >= x.shape[1]:
        return csc_matrix(([], ([], [])), shape=(x.shape[0], 0))
    if isinstance(x, np.ndarray):
        x = csc_matrix(x)
    # set-up of data/indices via column-wise difference
    if n > 0:
        for k in range(1, n + 1):
            # extract data/indices of non-zero entries of (current) sparse matrix
            M, N = x.shape
            idx, idy = x.nonzero()
            dat = x.data
            # difference: this column (y) * (-1) + next column (y+1)
            idx = np.concatenate((idx, idx))
            idy = np.concatenate((idy, idy - 1))
            dat = np.concatenate(((-1) * dat, dat))
            # filter valid indices
            validInd = (0 <= idy) & (idy < N - 1)
            # x_diff: csc_matrix emulating np.diff(x,1)'s output
            x_diff = csc_matrix((dat[validInd], (idx[validInd], idy[validInd])), shape=(M, N - 1))
            x_diff.sum_duplicates()
            x = x_diff
    return x
Moreover, the method outputs an empty csc_matrix of dimension Mx0 when the difference order is larger than or equal to the number of columns of the input matrix, mirroring np.diff. Applied to the identity matrix x from the question:
csc_diff(x, 2).toarray()
> array([[ 1.,  0.],
        [-2.,  1.],
        [ 1., -2.],
        [ 0.,  1.]])
which is identical to
np.diff(x.toarray(), 2)
> array([[ 1.,  0.],
        [-2.,  1.],
        [ 1., -2.],
        [ 0.,  1.]])
This identity holds for other difference orders, too
(csc_diff(x, 0).toarray() == np.diff(x.toarray(), 0)).all()
>True
(csc_diff(x, 3).toarray() == np.diff(x.toarray(), 3)).all()
>True
(csc_diff(x, 13).toarray() == np.diff(x.toarray(), 13)).all()
>True

why does np.convolve shift the resulted signal by 1

I have the following two signals:
X0 = array([0., 0., 0., 0., 0., 1., 0., 0., 0., 0.])
rbf_kernel = array([2.40369476e-04, 4.82794999e-03, 4.97870684e-02, 2.63597138e-01,
                    7.16531311e-01, 1.00000000e+00, 7.16531311e-01, 2.63597138e-01,
                    4.97870684e-02, 4.82794999e-03])
I tried to convolve the two signals using np.convolve(X0, rbf_kernel, mode='same'), but the resulting convolution is shifted by one to the right, as shown below. Green, orange and blue curves are X0, rbf_kernel and the result of the last command, respectively. I expected to see the maximum of the convolution where the two convolved signals align (i.e. at point 5), but that did not happen.
The result is shifted because of the padding used for same convolution. Convolution is a process of sliding the flipped kernel over the input and taking a dot product at each step.
For a valid convolution, the kernel must fully overlap the input at every stride, hence the output size is n - m + 1 (n = len(input), m = len(kernel), assuming m <= n). For a same convolution, the output size is max(m, n); to achieve that, we apply m - 1 zeros of padding to the input and then perform a valid convolution.
In your example n = m = 10, so the same-convolution output size is max(10, 10) = 10. It requires zero padding of m - 1 = 9, which is 5 zeros on the left and 4 on the right. The padded input (X0) looks like:
padded_x = [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]   # length 19
flipped kernel = [4.82794999e-03 4.97870684e-02 2.63597138e-01 7.16531311e-01
                  1.00000000e+00 7.16531311e-01 2.63597138e-01 4.97870684e-02
                  4.82794999e-03 2.40369476e-04]
So the convolution output is maximal at the 6th step (counting from 0).
Here's a sample SAME convolution code:
import numpy as np
import matplotlib.pyplot as plt

def same_conv(x, k):
    if len(k) > len(x):
        # consider longer as x and other as kernel
        x, k = k, x
    n = x.shape[0]
    m = k.shape[0]
    padding = m - 1
    left_pad = int(np.ceil(padding / 2))
    right_pad = padding - left_pad
    x = np.pad(x, (left_pad, right_pad), 'constant')
    out = []
    # flip the kernel
    k = k[::-1]
    for i in range(n):
        out.append(np.dot(x[i: i+m], k))
    return np.array(out)

X0 = np.array([0., 0., 0., 0., 0., 1., 0., 0., 0., 0.])
rbf_kernel = np.array([2.40369476e-04, 4.82794999e-03, 4.97870684e-02, 2.63597138e-01,
                       7.16531311e-01, 1.00000000e+00, 7.16531311e-01, 2.63597138e-01,
                       4.97870684e-02, 4.82794999e-03])
convolved = same_conv(X0, rbf_kernel)
plt.plot(X0)
plt.plot(rbf_kernel)
plt.plot(convolved)
plt.show()
which results in the same shifted output as yours.
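As an aside (an observation beyond the original answer): the off-by-one disappears when the kernel has odd length, because the m - 1 zeros of padding then split evenly. A minimal sketch:
import numpy as np

X0 = np.array([0., 0., 0., 0., 0., 1., 0., 0., 0., 0.])
# a symmetric kernel of odd length keeps 'same' convolution centred
odd_kernel = np.exp(-0.5 * (np.arange(-4, 5) / 1.5) ** 2)   # length 9
out = np.convolve(X0, odd_kernel, mode='same')
print(out.argmax())   # 5 -> peak aligned with the impulse in X0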

Optimize NumPy iterating through image and changing values

Is there a way to optimize this snippet of code? With my current im value, it is taking me ~28 seconds. Wondering if I could reduce this time.
im = output_image[-min_xy[1]:-min_xy[1] + image_2.shape[0], -min_xy[0]:-min_xy[0] + image_2.shape[1]]
for idx, rk in enumerate(im):
    for ix, k in enumerate(rk):
        image_2[idx][ix] = avg(im[idx][ix], image_2[idx][ix])
type(image_2) and type(im) is <type 'numpy.ndarray'>
im.shape and image_2.shape is (2386, 3200, 3)
what my avg() does is
def avg(a1, a2):
    if [0., 0., 0.] in a1:
        return a2
    else:
        return (a1 + a2) / 2
NOTE: a1 is an array of size 3 ex: array([ 0.68627451, 0.5372549 , 0.4745098])
The only obstacle to vectorization seemed to be that IF conditional in avg. To get past it, simply use the choosing capability of np.where and thus have our solution like so -
avgs = (im + image_2)/2.0
image_2_out = np.where((im == 0).any(-1,keepdims=1), image_2, avgs)
Note that this assumes with if [0., 0., 0.] in a1, you meant to check for ANY one match. If you meant to check for ALL zeros, simply use .all instead of .any.
Alternatively, to do in-situ edits in image_2, use a mask for boolean-indexing -
mask = ~(im == 0).any(-1)
image_2[mask] = avgs[mask]
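As a sanity check, here's a minimal sketch (with small hypothetical random data) verifying the vectorized version against the original loop:
import numpy as np

def avg(a1, a2):
    if [0., 0., 0.] in a1:   # True if any channel of a1 equals 0
        return a2
    return (a1 + a2) / 2

rng = np.random.default_rng(0)
im = rng.random((4, 5, 3)) + 0.1   # keep random channels away from 0
image_2 = rng.random((4, 5, 3))
im[1, 2] = 0.0                     # plant one all-zero pixel

# loop version
looped = image_2.copy()
for i in range(im.shape[0]):
    for j in range(im.shape[1]):
        looped[i, j] = avg(im[i, j], looped[i, j])

# vectorized version
avgs = (im + image_2) / 2.0
vectorized = np.where((im == 0).any(-1, keepdims=True), image_2, avgs)

print(np.allclose(looped, vectorized))   # True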

Calculate euclidean norm of 100 points of n-dimensions?

I have a list of 100 values in python where each value in the list corresponds to an n-dimensional list.
For e.g
x = [[1, 2], [2, 3]] is a 2d list
I want to compute euclidean norm over all such points. Is there a standard method to do this?
I found a function in scipy and this works.
If I have interpreted the question correctly, then you have a list of 100 n-dimensional vectors, and you would like a list of their (Euclidean) norms.
I think using numpy is easiest (and quickest!) here,
import numpy as np
a = np.array(x)
np.sqrt((a*a).sum(axis=1))
If the vectors do not have equal dimension, or if you want to avoid numpy, then perhaps,
[sum([i*i for i in vec])**0.5 for vec in x]
or,
import math
[math.sqrt(sum([i*i for i in vec])) for vec in x]
Edit: Not entirely sure what you were asking for. So, alternatively: it looks like you have a list, each element of which is an n-dimensional vector, and you want the Euclidean distance between each consecutive pair. With numpy (assuming n is fixed),
x = [ [1,2,3], [4,5,6], [8,9,10], [13,14,15] ] # 3D example.
import numpy as np
a = np.array(x)
sqrDiff = (a[:-1] - a[1:])**2
np.sqrt(sqrDiff.sum(axis=1))
where the last line returns,
array([ 5.19615242, 6.92820323, 8.66025404])
Try this code:
from math import sqrt

valueList = [[[1,2], [2,3]], [[2,2], [3,3]]]

def distance(valueList):
    resultList = []
    for point1, point2 in valueList:
        resultList.append(sqrt(sum((x1 - x2) ** 2 for x1, x2 in zip(point1, point2))))
    return resultList

print(distance(valueList))
The output is
[1.4142135623730951, 1.4142135623730951]
Here valueList contains 2 values, but there is no problem with 100 values.
You can do this to compute the euclidean norm of each row:
>>> a = np.arange(200.).reshape((100,2))
>>> a
array([[   0.,    1.],
       [   2.,    3.],
       [   4.,    5.],
       [   6.,    7.],
       [   8.,    9.],
       [  10.,   11.],
       ...
>>> np.sum(a**2, axis=-1) ** .5
array([  1.        ,   3.60555128,   6.40312424,   9.21954446,
        12.04159458,  14.86606875,  17.69180601,  20.51828453,
        23.34523506,  26.17250466,  29.        ,  31.82766093,
        34.6554469 ,  37.48332963,  40.31128874,  43.13930922,
        45.96737974,  48.7954916 ,  51.623638  ,  54.45181356,
       ...

How to normalize a 2-dimensional numpy array in python less verbose?

Given a 3 times 3 numpy array
a = numpy.arange(0,27,3).reshape(3,3)
# array([[ 0,  3,  6],
#        [ 9, 12, 15],
#        [18, 21, 24]])
To normalize the rows of the 2-dimensional array I thought of
row_sums = a.sum(axis=1)   # array([ 9, 36, 63])
new_matrix = numpy.zeros((3,3))
for i, (row, row_sum) in enumerate(zip(a, row_sums)):
    new_matrix[i,:] = row / row_sum
There must be a better way, isn't there?
Perhaps to clarify: by normalizing I mean that the sum of the entries per row must be one. But I think that will be clear to most people.
Broadcasting is really good for this:
row_sums = a.sum(axis=1)
new_matrix = a / row_sums[:, numpy.newaxis]
row_sums[:, numpy.newaxis] reshapes row_sums from being (3,) to being (3, 1). When you do a / b, a and b are broadcast against each other.
You can learn more about broadcasting in the NumPy documentation.
Scikit-learn offers a function normalize() that lets you apply various normalizations. The "make it sum to 1" is called L1-norm. Therefore:
from sklearn.preprocessing import normalize
matrix = numpy.arange(0,27,3).reshape(3,3).astype(numpy.float64)
# array([[  0.,   3.,   6.],
#        [  9.,  12.,  15.],
#        [ 18.,  21.,  24.]])
normed_matrix = normalize(matrix, axis=1, norm='l1')
# [[ 0.          0.33333333  0.66666667]
#  [ 0.25        0.33333333  0.41666667]
#  [ 0.28571429  0.33333333  0.38095238]]
Now your rows will sum to 1.
I think this should work,
a = numpy.arange(0,27.,3).reshape(3,3)
a /= a.sum(axis=1)[:,numpy.newaxis]
In case you are trying to normalize each row such that its magnitude is one (i.e. a row's unit length is one or the sum of the square of each element in a row is one):
import numpy as np
a = np.arange(0,27,3).reshape(3,3)
result = a / np.linalg.norm(a, axis=-1)[:, np.newaxis]
# array([[ 0.        ,  0.4472136 ,  0.89442719],
#        [ 0.42426407,  0.56568542,  0.70710678],
#        [ 0.49153915,  0.57346234,  0.65538554]])
Verifying:
np.sum( result**2, axis=-1 )
# array([ 1., 1., 1.])
I think you can normalize the rows so that the elements sum to 1 like this:
new_matrix = a / a.sum(axis=1, keepdims=1)
And the column normalization can be done with new_matrix = a / a.sum(axis=0, keepdims=1). Hope this can help.
You could use the built-in numpy function:
np.linalg.norm(a, axis=1, keepdims=True)
and divide a by the result to normalize the rows.
it appears that this also works
def normalizeRows(M):
    row_sums = M.sum(axis=1)
    # [:, np.newaxis] makes the division broadcast row-wise
    return M / row_sums[:, np.newaxis]
You could also use matrix transposition:
(a.T / row_sums).T
Here is one more possible way using reshape:
a_norm = (a/a.sum(axis=1).reshape(-1,1)).round(3)
print(a_norm)
Or using None works too:
a_norm = (a/a.sum(axis=1)[:,None]).round(3)
print(a_norm)
Output:
array([[0.   , 0.333, 0.667],
       [0.25 , 0.333, 0.417],
       [0.286, 0.333, 0.381]])
Use
a = a / np.linalg.norm(a, ord=2, axis=0, keepdims=True)
Due to the broadcasting, it will work as intended. (Note that axis=0 normalizes the columns; use axis=1 to normalize the rows.)
Or using a lambda function, like
>>> import numpy as np
>>> vec = np.arange(0, 27, 3).reshape(3, 3)
>>> norm_vec = list(map(lambda row: row / np.linalg.norm(row), vec))
each vector of vec will have a unit norm. (The list() call is needed in Python 3, where map returns an iterator.)
We can achieve the same effect by premultiplying with the diagonal matrix whose main diagonal is the reciprocal of the row sums.
A = np.diag(A.sum(1)**-1) @ A
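For example, a small check (note the float cast, since integer arrays cannot be raised to a negative power):
import numpy as np

A = np.arange(0, 27, 3).reshape(3, 3).astype(float)
normed = np.diag(A.sum(1) ** -1) @ A
print(normed.sum(axis=1))   # [1. 1. 1.]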
