How to compute a spatial distance matrix from a given value - python

I've been looking for a way to (efficiently) compute a distance matrix from a target value and an input matrix.
If you consider an input array as:
[0 0 1 2 5 2 1]
[0 0 2 3 5 2 1]
[0 1 1 2 5 4 1]
[1 1 1 2 5 4 0]
How do you compute the spatial distance matrix associated with the target value 0?
i.e. what is the distance from each pixel to the closest 0 value?
Thanks in advance

You are looking for scipy.ndimage.morphology.distance_transform_edt. It operates on a binary array and computes, for each True position, the Euclidean distance to the nearest False (background) position. In our case we want distances from the nearest 0s, so the background is 0. Under the hood, it converts the input to a binary array treating 0 as the background, so we can use it with the default parameters. Hence, it would be as simple as -
In [179]: a
Out[179]:
array([[0, 0, 1, 2, 5, 2, 1],
       [0, 0, 2, 3, 5, 2, 1],
       [0, 1, 1, 2, 5, 4, 1],
       [1, 1, 1, 2, 5, 4, 0]])
In [180]: from scipy import ndimage
In [181]: ndimage.distance_transform_edt(a)
Out[181]:
array([[0.  , 0.  , 1.  , 2.  , 3.  , 3.16, 3.  ],
       [0.  , 0.  , 1.  , 2.  , 2.83, 2.24, 2.  ],
       [0.  , 1.  , 1.41, 2.24, 2.24, 1.41, 1.  ],
       [1.  , 1.41, 2.24, 2.83, 2.  , 1.  , 0.  ]])
Solving for the generic case
Now, let's say we want to find distances from the nearest 1s; then it would be -
In [183]: background = 1  # element from which distances are to be computed
# compare this with the original array, a, to verify
In [184]: ndimage.distance_transform_edt(a != background)
Out[184]:
array([[2.  , 1.  , 0.  , 1.  , 2.  , 1.  , 0.  ],
       [1.41, 1.  , 1.  , 1.41, 2.  , 1.  , 0.  ],
       [1.  , 0.  , 0.  , 1.  , 2.  , 1.  , 0.  ],
       [0.  , 0.  , 0.  , 1.  , 2.  , 1.41, 1.  ]])
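As a side note (this goes beyond what the question asked for, but it is part of the same function): distance_transform_edt can also report which background element each pixel is closest to, via its return_indices argument:
In [185]: dist, inds = ndimage.distance_transform_edt(a != background,
                                                      return_distances=True,
                                                      return_indices=True)
# dist is the same distance matrix as above; inds[0] and inds[1] hold the
# row and column of the nearest `background` cell for each pixel.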

Related

Printing locations containing non-zero elements in Python

The following code prints the row numbers (solution1) that have at least one non-zero element. However, corresponding to these row numbers, how do I also print which locations have non-zero elements (solution2), as shown in the expected output? For instance, row 1 has non-zero elements at locations [1,3,4,6], and row 2 has non-zero elements at locations [0,2,3,5].
import numpy as np
A = np.array([[  0.        ,   0.        ,   0.        ,   0.        ,
                 0.        ,   0.        ,   0.        ,   0.        ,
                 0.        ,   0.        ,   0.        ,   0.        ],
              [  0.        , 423.81345923,   0.        , 407.01354328,
               419.14952534,   0.        , 212.13245959,   0.        ,
                 0.        ,   0.        ,   0.        ,   0.        ],
              [402.93473651,   0.        , 216.08166277, 407.01354328,
                 0.        , 414.17017965,   0.        ,   0.        ,
                 0.        ,   0.        ,   0.        ,   0.        ]])
solution1 = []
for idx, e in enumerate(A):
    if any(e):
        solution1.append(idx)
print("solution 1 =", solution1)
The current output is
solution 1 = [1,2]
The expected output is
solution 1 = [1,2]
solution 2 = [[1,3,4,6],[0,2,3,5]]
Use np.where to find the coordinates of all non-zero values first, and then split the column indices by row:
idx, idy = np.where(A)
np.split(idy, np.flatnonzero(np.diff(idx) != 0) + 1)
# [array([1, 3, 4, 6], dtype=int32), array([0, 2, 3, 5], dtype=int32)]
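This relies on np.where returning the row indices in sorted order, which holds for 2-D arrays. If you prefer something more explicit, an alternative sketch (not part of the original answer) collects the column indices row by row:
solution2 = [np.flatnonzero(row).tolist() for row in A if row.any()]
# [[1, 3, 4, 6], [0, 2, 3, 5]]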

Modifying (keras/tensorflow) Tensors using numpy methods

I want to perform a specific operation. Namely, from a matrix:
A = np.array([[1, 2],
              [3, 4]])
To the following
B = np.array([[1, 0, 0, 2, 0, 0],
              [0, 1, 0, 0, 2, 0],
              [0, 0, 1, 0, 0, 2],
              [3, 0, 0, 4, 0, 0],
              [0, 3, 0, 0, 4, 0],
              [0, 0, 3, 0, 0, 4]])
Or in words: multiply every entry by the identity matrix and keep the same order.
Now I have accomplished this with NumPy using the following code, where N is the dimension of the starting matrix and M is the dimension of the identity matrix.
N = 2  # dimension of the starting matrix
M = 3  # dimension of the identity matrix
A = np.reshape(np.arange(1, 1 + N ** 2), (N, N))
B = np.array([i * np.eye(M) for i in A.flatten()])
C = B.reshape(N, N, M, M).reshape(N, N * M, M).transpose([0, 2, 1]).reshape((N * M, N * M))
where C has my desired properties.
But now I want to do this modification in Keras/TensorFlow, where the matrix A is the output of one of my layers.
However, I am not sure yet if I will be able to properly create matrix B. Especially when batches are involved, I think I will somehow mess up the dimensions of my problem.
Can anyone with more Keras/Tensorflow experience comment on this 'reshape' and how he/she sees this happening within Keras/Tensorflow?
Here is a way to do that with TensorFlow:
import tensorflow as tf
data = tf.placeholder(tf.float32, [None, None])
n = tf.placeholder(tf.int32, [])
eye = tf.eye(n)
mult = data[:, tf.newaxis, :, tf.newaxis] * eye[tf.newaxis, :, tf.newaxis, :]
result = tf.reshape(mult, n * tf.shape(data))
with tf.Session() as sess:
    a = sess.run(result, feed_dict={data: [[1, 2], [3, 4]], n: 3})
    print(a)
Output:
[[1. 0. 0. 2. 0. 0.]
 [0. 1. 0. 0. 2. 0.]
 [0. 0. 1. 0. 0. 2.]
 [3. 0. 0. 4. 0. 0.]
 [0. 3. 0. 0. 4. 0.]
 [0. 0. 3. 0. 0. 4.]]
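If you are on TensorFlow 2.x, where placeholders and sessions are gone, the same multiplication can be run eagerly. This is a minimal sketch under that assumption; expand_with_identity is just an illustrative name, not an established API:
import tensorflow as tf

def expand_with_identity(data, n):
    # data: a (rows, cols) float tensor; n: size of the identity blocks
    eye = tf.eye(n, dtype=data.dtype)
    mult = data[:, tf.newaxis, :, tf.newaxis] * eye[tf.newaxis, :, tf.newaxis, :]
    return tf.reshape(mult, n * tf.shape(data))

print(expand_with_identity(tf.constant([[1., 2.], [3., 4.]]), 3))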
By the way, you can do basically the same in NumPy, which should be faster than your current solution:
import numpy as np
data = np.array([[1, 2], [3, 4]])
n = 3
eye = np.eye(n)
mult = data[:, np.newaxis, :, np.newaxis] * eye[np.newaxis, :, np.newaxis, :]
result = np.reshape(mult, (n * data.shape[0], n * data.shape[1]))
print(result)
# The output is the same as above
EDIT:
I'll try to give some intuition about why and how this works; sorry if it's long. It is not that hard, but I think it's tricky to explain. Maybe it is easier to see how the following multiplication works:
import numpy as np
data = np.array([[1, 2], [3, 4]])
n = 3
eye = np.eye(n)
mult1 = data[:, :, np.newaxis, np.newaxis] * eye[np.newaxis, np.newaxis, :, :]
Now, mult1 is a sort of "matrix of matrices". If I give two indices, I will get the diagonal matrix for the corresponding element in the original one:
print(mult1[0, 0])
# [[1. 0. 0.]
# [0. 1. 0.]
# [0. 0. 1.]]
So you could say this matrix can be visualized like this:
| 1 0 0 | | 2 0 0 |
| 0 1 0 | | 0 2 0 |
| 0 0 1 | | 0 0 2 |
| 3 0 0 | | 4 0 0 |
| 0 3 0 | | 0 4 0 |
| 0 0 3 | | 0 0 4 |
However, this is deceiving, because if you try to reshape this to the final shape, the result is not the right one:
print(np.reshape(mult1, (n * data.shape[0], n * data.shape[1])))
# [[1. 0. 0. 0. 1. 0.]
# [0. 0. 1. 2. 0. 0.]
# [0. 2. 0. 0. 0. 2.]
# [3. 0. 0. 0. 3. 0.]
# [0. 0. 3. 4. 0. 0.]
# [0. 4. 0. 0. 0. 4.]]
The reason is that reshaping (conceptually) "flattens" the array first and then gives the new shape. But the flattened array in this case is not what you need:
print(mult1.ravel())
# [1. 0. 0. 0. 1. 0. 0. 0. 1. 2. 0. 0. 0. 2. 0. ...
You see, it first traverses the first submatrix, then the second, and so on. What you want, though, is for it to traverse first the first row of the first submatrix, then the first row of the second submatrix, then the second row of the first submatrix, and so on. So basically you want something like:
Take the first two submatrices (the ones with 1 and 2)
Take all the first rows ([1, 0, 0] and [2, 0, 0]).
Take the first of these ([1, 0, 0])
Take each of its elements (1, 0 and 0).
And then continue for the rest. So if you think about it, we are traversing first axis 0 (rows of the "matrix of matrices"), then axis 2 (rows of each submatrix), then axis 1 (columns of the "matrix of matrices") and finally axis 3 (columns of the submatrices). So we can just reorder the axes to do that:
mult2 = mult1.transpose((0, 2, 1, 3))
print(np.reshape(mult2, (n * data.shape[0], n * data.shape[1])))
# [[1. 0. 0. 2. 0. 0.]
# [0. 1. 0. 0. 2. 0.]
# [0. 0. 1. 0. 0. 2.]
# [3. 0. 0. 4. 0. 0.]
# [0. 3. 0. 0. 4. 0.]
# [0. 0. 3. 0. 0. 4.]]
And it works! So in the solution I posted, to avoid the transposing, I just write the multiplication so the order of the axes is exactly that:
mult = data[
    :,           # Matrix-of-matrices rows
    np.newaxis,  # Submatrix rows
    :,           # Matrix-of-matrices columns
    np.newaxis   # Submatrix columns
] * eye[
    np.newaxis,  # Matrix-of-matrices rows
    :,           # Submatrix rows
    np.newaxis,  # Matrix-of-matrices columns
    :            # Submatrix columns
]
I hope that makes it slightly clearer. To be honest, in this case in particular I could come up with the solution quickly because I had to solve a similar problem not too long ago, and I guess you end up building an intuition for these things.
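For what it's worth, a quick check (reusing data, eye and mult1 from the snippets above) confirms that the direct construction and the transpose-based one agree:
mult = data[:, np.newaxis, :, np.newaxis] * eye[np.newaxis, :, np.newaxis, :]
print(np.array_equal(mult, mult1.transpose((0, 2, 1, 3))))  # True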
Another way to achieve the same effect in numpy is to use the following:
A = np.array([[1, 2],
              [3, 4]])
B = np.repeat(np.repeat(A, 3, axis=0), 3, axis=1) * np.tile(np.eye(3), (2, 2))
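Incidentally (this is not part of the original answer), the block structure being built here is exactly a Kronecker product, so NumPy can also produce it in a single call:
B = np.kron(A, np.eye(3))  # same result as the repeat/tile construction above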
Then, to replicate this in TensorFlow, we can use tf.tile, but there is no tf.repeat; however, someone has provided such a function on the TensorFlow issue tracker.
def tf_repeat(tensor, repeats):
    """
    Args:
        tensor: A Tensor. 1-D or higher.
        repeats: A list. Number of repeats for each dimension; its length must
            match the number of dimensions of the input tensor.
    Returns:
        A Tensor. Has the same type as the input, with shape tensor.shape * repeats.
    """
    with tf.variable_scope("repeat"):
        expanded_tensor = tf.expand_dims(tensor, -1)
        multiples = [1] + list(repeats)
        tiled_tensor = tf.tile(expanded_tensor, multiples=multiples)
        repeated_tensor = tf.reshape(tiled_tensor, tf.shape(tensor) * repeats)
    return repeated_tensor
and thus the TensorFlow implementation will look like the following. Here I also assume that the first dimension represents batches, so we do not operate on it.
N = 2
M = 3
nbatch = 2
Ain = np.reshape(np.arange(1, 1 + N*N*nbatch), (nbatch, N, N))
A = tf.placeholder(tf.float32, shape=(nbatch, N, N))
C = tf.tile(tf.eye(M), [N, N]) * tf_repeat(A, [1, M, M])
with tf.Session() as sess:
    print(sess.run(C, feed_dict={A: Ain}))
and the result:
[[[1. 0. 0. 2. 0. 0.]
  [0. 1. 0. 0. 2. 0.]
  [0. 0. 1. 0. 0. 2.]
  [3. 0. 0. 4. 0. 0.]
  [0. 3. 0. 0. 4. 0.]
  [0. 0. 3. 0. 0. 4.]]

 [[5. 0. 0. 6. 0. 0.]
  [0. 5. 0. 0. 6. 0.]
  [0. 0. 5. 0. 0. 6.]
  [7. 0. 0. 8. 0. 0.]
  [0. 7. 0. 0. 8. 0.]
  [0. 0. 7. 0. 0. 8.]]]

How to scale each column of a matrix

This is how I scale a single vector:
vector = np.array([-4, -3, -2, -1, 0])
# pass the vector, current range of values, the desired range, and it returns the scaled vector
scaledVector = np.interp(vector, (vector.min(), vector.max()), (-1, +1)) # results in [-1. -0.5 0. 0.5 1. ]
How can I apply the above approach to each column of a given matrix?
matrix = np.array(
    [[-4,  -4,  0,   0,  0],
     [-3,  -3,  1, -15,  0],
     [-2,  -2,  8,  -1,  0],
     [-1,  -1, 11,  12,  0],
     [ 0,   0, 50,  69, 80]])
scaledMatrix = [insert code that scales each column of the matrix]
Note that the first two columns of the scaledMatrix should be equal to the scaledVector from the first example. For the matrix above, the correctly computed scaledMatrix is:
[[-1. -1. -1. -0.64285714 -1. ]
[-0.5 -0.5 -0.96 -1. -1. ]
[ 0. 0. -0.68 -0.66666667 -1. ]
[ 0.5 0.5 -0.56 -0.35714286 -1. ]
[ 1. 1. 1. 1. 1. ]]
My current approach (wrong):
np.interp(matrix, (np.min(matrix), np.max(matrix)), (-1, +1))
If you want to do it by hand and understand what's going on:
First, subtract the column-wise minima so that each column has a minimum of 0.
Then divide by the column-wise amplitude (max - min) so that each column has a maximum of 1.
Now each column is between 0 and 1. If you want it to be between -1 and 1, multiply by 2 and subtract 1:
In [3]: mins = np.min(matrix, axis=0)
In [4]: maxs = np.max(matrix, axis=0)
In [5]: (matrix - mins[None, :]) / (maxs[None, :] - mins[None, :])
Out[5]:
array([[ 0.        ,  0.        ,  0.        ,  0.17857143,  0.        ],
       [ 0.25      ,  0.25      ,  0.02      ,  0.        ,  0.        ],
       [ 0.5       ,  0.5       ,  0.16      ,  0.16666667,  0.        ],
       [ 0.75      ,  0.75      ,  0.22      ,  0.32142857,  0.        ],
       [ 1.        ,  1.        ,  1.        ,  1.        ,  1.        ]])
In [6]: 2 * _ - 1
Out[6]:
array([[-1.        , -1.        , -1.        , -0.64285714, -1.        ],
       [-0.5       , -0.5       , -0.96      , -1.        , -1.        ],
       [ 0.        ,  0.        , -0.68      , -0.66666667, -1.        ],
       [ 0.5       ,  0.5       , -0.56      , -0.35714286, -1.        ],
       [ 1.        ,  1.        ,  1.        ,  1.        ,  1.        ]])
I use [None, :] so that numpy understands I'm talking about "row vectors", not column ones.
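In fact, because matrix is 2-D and mins/maxs are 1-D, broadcasting already aligns the vectors with the last axis, so the [None, :] can be dropped. This is just a more compact restatement of the same computation:
scaled01 = (matrix - mins) / (maxs - mins)   # each column now in [0, 1]
scaledMatrix = 2 * scaled01 - 1              # rescale to [-1, 1]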
Otherwise, use the wonderful sklearn package, whose preprocessing module has lots of useful transformers:
In [13]: from sklearn.preprocessing import MinMaxScaler
In [14]: scaler = MinMaxScaler(feature_range=(-1, 1))
In [15]: scaler.fit(matrix)
Out[15]: MinMaxScaler(copy=True, feature_range=(-1, 1))
In [16]: scaler.transform(matrix)
Out[16]:
array([[-1.        , -1.        , -1.        , -0.64285714, -1.        ],
       [-0.5       , -0.5       , -0.96      , -1.        , -1.        ],
       [ 0.        ,  0.        , -0.68      , -0.66666667, -1.        ],
       [ 0.5       ,  0.5       , -0.56      , -0.35714286, -1.        ],
       [ 1.        ,  1.        ,  1.        ,  1.        ,  1.        ]])
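If you would rather reuse np.interp from the question, one option (a sketch, assuming no column is constant, since np.interp needs an increasing (min, max) pair) is to apply it column by column:
scale_col = lambda col: np.interp(col, (col.min(), col.max()), (-1, 1))
scaledMatrix = np.apply_along_axis(scale_col, 0, matrix)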

Explanation on Numpy Broadcasting Answer

I recently posted a question here which was answered exactly as I asked. However, I think I overestimated my ability to manipulate the answer further. I read the broadcasting doc, and followed a few links that led me way back to 2002 about numpy broadcasting.
I've used the second method of array creation using broadcasting:
N = 10
out = np.zeros((N**3,4),dtype=int)
out[:,:3] = (np.arange(N**3)[:,None]/[N**2,N,1])%N
which outputs:
[[0,0,0,0]
[0,0,1,0]
...
[0,1,0,0]
[0,1,1,0]
...
[9,9,8,0]
[9,9,9,0]]
but I do not understand via the docs how to manipulate that. I would ideally like to be able to set the increments in which each individual column changes.
ex. Column A changes by 0.5 up to 2, column B changes by 0.2 up to 1, and column C changes by 1 up to 10.
[[0,0,0,0]
[0,0,1,0]
...
[0,0,9,0]
[0,0.2,0,0]
...
[0,0.8,9,0]
[0.5,0,0,0]
...
[1.5,0.8,9,0]]
Thanks for any help.
You can adjust your current code just a little bit to make it work.
>>> out = np.zeros((4*5*10,4))
>>> out[:,:3] = (np.arange(4*5*10)[:,None]//(5*10, 10, 1)*(0.5, 0.2, 1)%(2, 1, 10))
>>> out
array([[ 0. ,  0. ,  0. ,  0. ],
       [ 0. ,  0. ,  1. ,  0. ],
       [ 0. ,  0. ,  2. ,  0. ],
       ...
       [ 0. ,  0. ,  8. ,  0. ],
       [ 0. ,  0. ,  9. ,  0. ],
       [ 0. ,  0.2,  0. ,  0. ],
       ...
       [ 0. ,  0.8,  9. ,  0. ],
       [ 0.5,  0. ,  0. ,  0. ],
       ...
       [ 1.5,  0.8,  9. ,  0. ]])
The changes are (a generalized sketch follows this list):
No int dtype on the array, since we need it to hold floats in some columns. You could specify a float dtype if you want (or even something more complicated that only allows floats in the first two columns).
Rather than N**3 total values, figure out the number of distinct values for each column, and multiply them together to get our total size. This is used for both zeros and arange.
Use the floor division // operator in the first broadcast operation because we want integers at this point, but later we'll want floats.
The values to divide by are again based on the number of values for the later columns (e.g. for A,B,C numbers of values, divide by B*C, C, 1).
Add a new broadcast operation to multiply by various scale factors (how much each value increases at once).
Change the values in the broadcast mod % operation to match the bounds on each column.
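Putting those changes together, here is a small generalized helper (the function name and the idea of deriving per-column counts from step/limit pairs are mine, not from the original answer). Doing the modulo on integer counts before scaling also avoids floating-point artifacts that % can produce on floats:
import numpy as np

def stepped_grid(steps, limits):
    # steps[i]  : increment of column i
    # limits[i] : exclusive upper bound of column i
    steps = np.asarray(steps, dtype=float)
    limits = np.asarray(limits, dtype=float)
    counts = np.round(limits / steps).astype(int)               # values per column
    divisors = np.append(np.cumprod(counts[::-1])[-2::-1], 1)   # e.g. (B*C, C, 1)
    total = counts.prod()
    out = np.zeros((total, len(steps) + 1))
    out[:, :-1] = (np.arange(total)[:, None] // divisors % counts) * steps
    return out

out = stepped_grid([0.5, 0.2, 1], [2, 1, 10])   # 200 rows, same values as above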
This small example helps me understand what is going on:
In [123]: N = 2
In [124]: np.arange(N**3)[:,None]/[N**2, N, 1]
Out[124]:
array([[ 0.  ,  0. ,  0. ],
       [ 0.25,  0.5,  1. ],
       [ 0.5 ,  1. ,  2. ],
       [ 0.75,  1.5,  3. ],
       [ 1.  ,  2. ,  4. ],
       [ 1.25,  2.5,  5. ],
       [ 1.5 ,  3. ,  6. ],
       [ 1.75,  3.5,  7. ]])
So we generate a range of numbers (0 to 7) and divide them by 4, 2, and 1.
The rest of the calculation just changes each value without further broadcasting.
Apply % N to each element:
In [126]: np.arange(N**3)[:,None]/[N**2, N, 1]%N
Out[126]:
array([[ 0.  ,  0. ,  0. ],
       [ 0.25,  0.5,  1. ],
       [ 0.5 ,  1. ,  0. ],
       [ 0.75,  1.5,  1. ],
       [ 1.  ,  0. ,  0. ],
       [ 1.25,  0.5,  1. ],
       [ 1.5 ,  1. ,  0. ],
       [ 1.75,  1.5,  1. ]])
Assigning to an int array is the same as converting the floats to integers:
In [127]: (np.arange(N**3)[:,None]/[N**2, N, 1]%N).astype(int)
Out[127]:
array([[0, 0, 0],
       [0, 0, 1],
       [0, 1, 0],
       [0, 1, 1],
       [1, 0, 0],
       [1, 0, 1],
       [1, 1, 0],
       [1, 1, 1]])

Matlab / Octave bwdist() in Python or C

Does anyone know of a Python replacement for the Matlab / Octave bwdist() function? This function returns the Euclidean distance of each cell to the closest non-zero cell for a given matrix. I saw an Octave C implementation and a pure Matlab implementation, and I was wondering if anyone had implemented this in ANSI C (without any Matlab / Octave headers, so I can integrate it from Python easily) or in pure Python.
Both links I mentioned are below:
C++
Matlab M-File
As a test, a Matlab code / output looks something like this:
bw = [0 1 0 0 0;
      1 0 0 0 0;
      0 0 0 0 1;
      0 0 0 0 0;
      0 0 1 0 0]
D = bwdist(bw)
D =
   1.00000   0.00000   1.00000   2.00000   2.00000
   0.00000   1.00000   1.41421   1.41421   1.00000
   1.00000   1.41421   2.00000   1.00000   0.00000
   2.00000   1.41421   1.00000   1.41421   1.00000
   2.00000   1.00000   0.00000   1.00000   2.00000
I tested a recommended distance_transform_edt call in Python, which gave this result:
import numpy as np
from scipy import ndimage

a = np.array(([0,1,0,0,0],
              [1,0,0,0,0],
              [0,0,0,0,1],
              [0,0,0,0,0],
              [0,0,1,0,0]))
res = ndimage.distance_transform_edt(a)
print(res)

[[ 0.  1.  0.  0.  0.]
 [ 1.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  1.]
 [ 0.  0.  0.  0.  0.]
 [ 0.  0.  1.  0.  0.]]
This result does not seem to match the Octave / Matlab output.
While Matlab's bwdist returns distances to the closest non-zero cell, Python's distance_transform_edt returns distances "to the closest background element". The SciPy documentation is not explicit about what it considers to be the "background" (there is some type-conversion machinery behind it); in practice, 0 is the background and anything non-zero is the foreground.
So if we have matrix a:
>>> a = np.array(([0,1,0,0,0],
...               [1,0,0,0,0],
...               [0,0,0,0,1],
...               [0,0,0,0,0],
...               [0,0,1,0,0]))
then to calculate the same result we need to replace ones with zeros and zeros with ones, e.g. consider the matrix 1 - a:
>>> a
array([[0, 1, 0, 0, 0],
       [1, 0, 0, 0, 0],
       [0, 0, 0, 0, 1],
       [0, 0, 0, 0, 0],
       [0, 0, 1, 0, 0]])
>>> 1 - a
array([[1, 0, 1, 1, 1],
       [0, 1, 1, 1, 1],
       [1, 1, 1, 1, 0],
       [1, 1, 1, 1, 1],
       [1, 1, 0, 1, 1]])
In this case scipy.ndimage.morphology.distance_transform_edt gives the expected results:
>>> distance_transform_edt(1-a)
array([[ 1.        ,  0.        ,  1.        ,  2.        ,  2.        ],
       [ 0.        ,  1.        ,  1.41421356,  1.41421356,  1.        ],
       [ 1.        ,  1.41421356,  2.        ,  1.        ,  0.        ],
       [ 2.        ,  1.41421356,  1.        ,  1.41421356,  1.        ],
       [ 2.        ,  1.        ,  0.        ,  1.        ,  2.        ]])
Does scipy.ndimage.morphology.distance_transform_edt meet your needs?
There is no need to compute 1 - a; a boolean mask works just as well:
>>> distance_transform_edt(a==0)
array([[ 1.        ,  0.        ,  1.        ,  2.        ,  2.        ],
       [ 0.        ,  1.        ,  1.41421356,  1.41421356,  1.        ],
       [ 1.        ,  1.41421356,  2.        ,  1.        ,  0.        ],
       [ 2.        ,  1.41421356,  1.        ,  1.41421356,  1.        ],
       [ 2.        ,  1.        ,  0.        ,  1.        ,  2.        ]])
I think you can use distanceTransform() from OpenCV, which calculates the distance to the closest zero pixel for each pixel of the source image.
Check this link: https://docs.opencv.org/3.4/d7/d1b/group__imgproc__misc.html#ga8a0b7fdfcb7a13dde018988ba3a43042
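Here is a minimal sketch of that, assuming the opencv-python package is installed. Note that distanceTransform measures distances to zero pixels, so the mask has to be inverted to reproduce bwdist (distance to the nearest non-zero cell):
import numpy as np
import cv2

bw = np.array([[0, 1, 0, 0, 0],
               [1, 0, 0, 0, 0],
               [0, 0, 0, 0, 1],
               [0, 0, 0, 0, 0],
               [0, 0, 1, 0, 0]], dtype=np.uint8)

src = (bw == 0).astype(np.uint8)   # zeros of src mark the former non-zero cells
D = cv2.distanceTransform(src, cv2.DIST_L2, cv2.DIST_MASK_PRECISE)
print(D)   # matches the bwdist output above (up to float32 precision)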
