Scikit-image and central moments: what is the meaning? - python

Looking for examples of how to use image processing tools to "describe" images and shapes of any sort, I stumbled upon the scikit-image function skimage.measure.moments_central(image, cr, cc, order=3).
They give an example of how to use this function:
from skimage import measure #Package name in Enthought Canopy
import numpy as np

image = np.zeros((20, 20), dtype=np.double) #Square image of zeros
image[13:17, 13:17] = 1 #Adding a square of 1s
m = measure.moments(image)
cr = m[0, 1] / m[0, 0] #Row coordinate of the centroid
cc = m[1, 0] / m[0, 0] #Column coordinate of the centroid

In[1]: measure.moments_central(image, cr, cc)
Out[1]:
array([[ 16.,   0.,  20.,   0.],
       [  0.,   0.,   0.,   0.],
       [ 20.,   0.,  25.,   0.],
       [  0.,   0.,   0.,   0.]])
1) What does each of the values represent? Since the (0, 0) element is 16, I gather that this number corresponds to the area of the square of 1s, and therefore it is mu zero-zero. But what about the others?
2) Is this always a symmetric matrix?
3) What are the values associated with the famous second central moments?

The array returned by measure.moments_central corresponds to the formulas in the "Central moments" section of https://en.wikipedia.org/wiki/Image_moment. mu_00 does indeed correspond to the area of the object.
The inertia matrix is not always symmetric, as shown by this example where the object is a rectangle instead of a square.
>>> image = np.zeros((20, 20), dtype=np.double)  # square image of zeros
>>> image[14:16, 13:17] = 1  # a 2x4 rectangle of 1s
>>> m = measure.moments(image)
>>> cr = m[0, 1] / m[0, 0]
>>> cc = m[1, 0] / m[0, 0]
>>> measure.moments_central(image, cr, cc)
array([[  8. ,   0. ,   2. ,   0. ],
       [  0. ,   0. ,   0. ,   0. ],
       [ 10. ,   0. ,   2.5,   0. ],
       [  0. ,   0. ,   0. ,   0. ]])
As for the second-order central moments, they are mu_02, mu_11, and mu_20 (the coefficients with i + j = 2). The same Wikipedia page https://en.wikipedia.org/wiki/Image_moment explains how to use the second-order moments to compute the orientation of objects.
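For a concrete illustration (a sketch added here, not part of the original answer), the orientation formula from that page can be applied to the rectangle example above. It assumes the same, older moments_central(image, cr, cc) signature used in the question (newer skimage versions take a center=(cr, cc) argument instead), and which image axis mu[2, 0] versus mu[0, 2] refers to depends on the skimage version, so the angle is expressed in the array's own index convention.
import numpy as np
from skimage import measure

image = np.zeros((20, 20), dtype=np.double)
image[14:16, 13:17] = 1  # the 2x4 rectangle from the example above

m = measure.moments(image)
cr = m[0, 1] / m[0, 0]
cc = m[1, 0] / m[0, 0]
mu = measure.moments_central(image, cr, cc)

# normalized second central moments (mu'_20, mu'_02, mu'_11 on the Wikipedia page)
mup20 = mu[2, 0] / mu[0, 0]
mup02 = mu[0, 2] / mu[0, 0]
mup11 = mu[1, 1] / mu[0, 0]

# orientation of the major axis: theta = 0.5 * atan2(2 * mu'_11, mu'_20 - mu'_02)
theta = 0.5 * np.arctan2(2 * mup11, mup20 - mup02)
print(theta)  # 0.0 here, since the rectangle is axis-aligned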

Related

Convert binomial tree to paths

I have a binomial tree stored as an upper triangular matrix:
array([[400., 500., 625.],
       [  0., 320., 400.],
       [  0.,   0., 256.]])
and I am trying to convert it to a matrix with all possible paths, like:
array([[400., 500., 625.],
       [400., 500., 400.],
       [400., 320., 400.],
       [400., 320., 256.]])
I've written a snippet that does the job when there are only 2 steps:
def unstack_tree(tree):
    output_map = []
    for i in range(tree.shape[0] - 1):
        for j in range(tree.shape[1] - 1):
            output_map.append([tree[0, 0], tree[i, 1], tree[i + j, 2]])
    return np.array(output_map)
But I am struggling with how to generalize it to N steps to handle, say, a 3-step tree:
array([[400.  , 500.  , 625.  , 781.25],
       [  0.  , 320.  , 400.  , 500.  ],
       [  0.  ,   0.  , 256.  , 320.  ],
       [  0.  ,   0.  ,   0.  , 204.8 ]])
I think I need more loops but cannot formulate it.
Each path can be represented by a binary code: the first is (0, 0), the second (0, 1), the third (1, 0), and so on. The actual row indices into the array are then the cumulative sum of that binary representation.
import numpy as np
from itertools import product

n = 2
b = np.array([[400., 500., 625.],
              [  0., 320., 400.],
              [  0.,   0., 256.]])

a = np.array(list(product((0, 1), repeat=n)))
a = np.c_[[0] * 2 ** n, a]
print(a)
# [[0 0 0]
#  [0 0 1]
#  [0 1 0]
#  [0 1 1]]

a = a.cumsum(axis=1)
print(a)
# [[0 0 0]
#  [0 0 1]
#  [0 1 1]
#  [0 1 2]]

print(np.choose(a, b))
# [[400. 500. 625.]
#  [400. 500. 400.]
#  [400. 320. 400.]
#  [400. 320. 256.]]
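Following the same idea, here is a sketch (an addition, not part of the original answer) of an unstack_tree that works for any number of steps; it assumes the tree is an (n+1) x (n+1) upper-triangular array as in the question.
import numpy as np
from itertools import product

def unstack_tree(tree):
    """Return all 2**n root-to-leaf paths of an (n+1) x (n+1) binomial tree."""
    n = tree.shape[1] - 1                               # number of steps
    moves = np.array(list(product((0, 1), repeat=n)))   # 0 = up, 1 = down at each step
    rows = np.c_[np.zeros((2 ** n, 1), dtype=int), moves].cumsum(axis=1)
    cols = np.arange(n + 1)                             # column k holds the prices at step k
    return tree[rows, cols]                             # fancy indexing picks one entry per step per path

tree3 = np.array([[400.  , 500.  , 625.  , 781.25],
                  [  0.  , 320.  , 400.  , 500.  ],
                  [  0.  ,   0.  , 256.  , 320.  ],
                  [  0.  ,   0.  ,   0.  , 204.8 ]])
print(unstack_tree(tree3))  # 8 rows, one per path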

Is there a short way in networkx (Python) to calculate the reachability matrix?

Imagine I am given a directed graph and I want a numpy reachability matrix indicating whether a path exists, so R(i, j) = 1 if and only if there is a path from i to j.
networkx has the function has_path(G, source, target), but it only works for a specific source and target node. Therefore, I've so far been doing this:
import networkx as nx
import numpy as np

R = np.zeros((d, d))  # d = number of nodes
for i in range(d):
    for j in range(d):
        if nx.has_path(G, i, j):
            R[i, j] = 1
Is there a nicer way to achieve this?
Here is a minimal example with real numbers:
import networkx as nx
import numpy as np
c=np.random.rand(4,4)
G=nx.DiGraph(c)
A=nx.minimum_spanning_arborescence(G)
adj=nx.to_numpy_matrix(A)
Here we can see that this is the adjacency matrix but not the reachability matrix; with my example numbers I would get
adj =
matrix([[0.        , 0.        , 0.        , 0.        ],
        [0.        , 0.        , 0.47971056, 0.        ],
        [0.        , 0.        , 0.        , 0.        ],
        [0.16101491, 0.04779295, 0.        , 0.        ]])
So there is a path from 4 to 2 (adj(4,2) > 0) and from 2 to 3 (adj(2,3) > 0), so there should also be a path from 4 to 3, but adj(4,3) = 0.
You could use all_pairs_shortest_path_length:
import networkx as nx
import numpy as np
np.random.seed(42)
c = np.random.rand(4, 4)
G = nx.DiGraph(c)
length = dict(nx.all_pairs_shortest_path_length(G))
R = np.array([[length.get(m, {}).get(n, 0) > 0 for m in G.nodes] for n in G.nodes], dtype=np.int32)
print(R)
Output
[[1 1 1 1 1]
[0 1 1 1 1]
[0 0 1 1 1]
[0 0 0 1 1]
[0 0 0 0 1]]
One approach could be to find all descendants of each node, and set the corresponding rows that are reachable to 1:
a = np.zeros((len(A.nodes()),) * 2)
for node in A.nodes():
    s = list(nx.descendants(A, node))
    a[s, node] = 1
print(a)
array([[0., 0., 1., 0.],
       [1., 0., 1., 0.],
       [0., 0., 0., 0.],
       [1., 1., 1., 0.]])
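Another option, sketched here as an addition (not part of either answer), is to let networkx compute the transitive closure of the graph and read off its adjacency matrix; this assumes the nodes are the integers 0..n-1, as they are when the graph is built from a numpy array.
import networkx as nx
import numpy as np

# A is the DiGraph (here, the arborescence) from above.
# The transitive closure has an edge (i, j) exactly when j is reachable from i.
T = nx.transitive_closure(A)
R = nx.to_numpy_array(T, nodelist=sorted(A.nodes()), weight=None).astype(int)
print(R)  # R[i, j] == 1 iff there is a non-empty path from i to j
On older networkx versions, nx.to_numpy_matrix plays the role of nx.to_numpy_array.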

How to select tensor element by other index tensor in tensorflow [duplicate]

I'm having trouble understanding a basic concept with tensorflow. How does indexing work for tensor read/write operations? In order to make this specific, how can the following numpy examples be translated to tensorflow (using tensors for the arrays, indices and values being assigned):
x = np.zeros((3, 4))
row_indices = np.array([1, 1, 2])
col_indices = np.array([0, 2, 3])
x[row_indices, col_indices] = 2
x
with output:
array([[ 0.,  0.,  0.,  0.],
       [ 2.,  0.,  2.,  0.],
       [ 0.,  0.,  0.,  2.]])
... and ...
x[row_indices, col_indices] = np.array([5, 4, 3])
x
with output:
array([[ 0.,  0.,  0.,  0.],
       [ 5.,  0.,  4.,  0.],
       [ 0.,  0.,  0.,  3.]])
... and finally ...
y = x[row_indices, col_indices]
y
with output:
array([ 5., 4., 3.])
There's GitHub issue #206 to support this nicely; meanwhile you have to resort to verbose work-arounds.
The first example can be done with tf.select, which combines two same-shaped tensors by selecting each element from one or the other:
tf.reset_default_graph()
row_indices = tf.constant([1, 1, 2])
col_indices = tf.constant([0, 2, 3])
x = tf.zeros((3, 4))
sess = tf.InteractiveSession()
# get list of ((row1, col1), (row2, col2), ..)
coords = tf.transpose(tf.pack([row_indices, col_indices]))
# get tensor with 1's at positions (row1, col1),...
binary_mask = tf.sparse_to_dense(coords, x.get_shape(), 1)
# convert 1/0 to True/False
binary_mask = tf.cast(binary_mask, tf.bool)
twos = 2*tf.ones(x.get_shape())
# make new x out of old values or 2, depending on mask
x = tf.select(binary_mask, twos, x)
print x.eval()
gives
[[ 0.  0.  0.  0.]
 [ 2.  0.  2.  0.]
 [ 0.  0.  0.  2.]]
The second one could be done with scatter_update, except scatter_update only supports linear indices and works on variables. So you could create a temporary variable and use reshaping like this (to avoid variables you could use dynamic_stitch, see the end):
# get linear indices
linear_indices = row_indices*x.get_shape()[1]+col_indices
# turn 'x' into 1d variable since "scatter_update" supports linear indexing only
x_flat = tf.Variable(tf.reshape(x, [-1]))
# no automatic promotion, so make updates float32 to match x
updates = tf.constant([5, 4, 3], dtype=tf.float32)
sess.run(tf.initialize_all_variables())
sess.run(tf.scatter_update(x_flat, linear_indices, updates))
# convert back into original shape
x = tf.reshape(x_flat, x.get_shape())
print x.eval()
gives
[[ 0.  0.  0.  0.]
 [ 5.  0.  4.  0.]
 [ 0.  0.  0.  3.]]
Finally, the third example is already supported with gather_nd; you write
print tf.gather_nd(x, coords).eval()
To get
[ 5. 4. 3.]
Edit, May 6
The update x[rows, cols] = newvals can be done without using Variables (which occupy memory between session.run calls) by using select with a sparse_to_dense that takes a vector of sparse values, or by relying on dynamic_stitch:
sess = tf.InteractiveSession()
x = tf.zeros((3, 4))
row_indices = tf.constant([1, 1, 2])
col_indices = tf.constant([0, 2, 3])
# no automatic promotion, so specify float type
replacement_vals = tf.constant([5, 4, 3], dtype=tf.float32)
# convert to linear indexing in row-major form
linear_indices = row_indices*x.get_shape()[1]+col_indices
x_flat = tf.reshape(x, [-1])
# use dynamic stitch, it merges the array by taking value either
# from array1[index1] or array2[index2], if indices conflict,
# the later one is used
unchanged_indices = tf.range(tf.size(x_flat))
changed_indices = linear_indices
x_flat = tf.dynamic_stitch([unchanged_indices, changed_indices],
                           [x_flat, replacement_vals])
x = tf.reshape(x_flat, x.get_shape())
print x.eval()
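As a side note added here (not from the original answer): in modern TensorFlow 2.x with eager execution, the same three operations can be written far more compactly with tf.tensor_scatter_nd_update and tf.gather_nd. A rough sketch:
import tensorflow as tf

coords = tf.constant([[1, 0], [1, 2], [2, 3]])  # (row, col) pairs
x = tf.zeros((3, 4))

# x[row_indices, col_indices] = 2
x = tf.tensor_scatter_nd_update(x, coords, tf.fill([3], 2.0))

# x[row_indices, col_indices] = [5, 4, 3]
x = tf.tensor_scatter_nd_update(x, coords, tf.constant([5., 4., 3.]))

# y = x[row_indices, col_indices]
y = tf.gather_nd(x, coords)
print(x.numpy())
print(y.numpy())  # [5. 4. 3.]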

SciPy minimize with gradient

This is an implementation of logistic regression, using a toy data set. Some feedback from #dermen helped me fix a basic problem with how I was using scipy.optimize.minimize, but even after fixing that issue, optimize fails to converge, even just using the first five rows of the test data set. Here is a stand-alone version of the code:
import numpy as np
from scipy.optimize import minimize

# `data` is a subset of a toy dataset. The full dataset is ~100 rows, linearly separable and located at
# https://github.com/liavkoren/ng-redux/blob/master/ex2/ex2data1.txt
data = np.array([
    [34.62365962, 78.02469282],
    [30.28671077, 43.89499752],
    [35.84740877, 72.90219803],
    [60.18259939, 86.3085521 ],
    [79.03273605, 75.34437644],
    [45.08327748, 56.31637178],
    [61.10666454, 96.51142588],
    [75.02474557, 46.55401354],
    [76.0987867 , 87.42056972],
    [84.43281996, 43.53339331],
])
# Ground truth
y = np.array([0., 0., 0., 1., 1., 0., 1., 1., 1., 1.])

def sigmoid(z):
    return 1/(1 + np.power(np.e, -z))

h = lambda theta, x: sigmoid(x.dot(theta))

def cost(theta, X, y):
    m = X.shape[0]
    j = y.dot(np.log(h(theta, X))) + (1 - y).dot(np.log(1 - h(theta, X)))
    return (-j/m)

def grad(theta, X, y):
    m = X.shape[0]
    return ((h(theta, X) - y).dot(X))/m

# Add a column of ones:
X_initial = data
m, features = np.shape(X_initial)
features += 1
X = np.concatenate([np.ones((m, 1)), X_initial], axis=1)
initial_theta = np.zeros((features))

def check_functions(grad_func, cost_func):
    '''
    Asserts that the cost and gradient functions return known correct values for a given theta, X, y.
    Test case from https://www.coursera.org/learn/machine-learning/discussions/weeks/3/threads/tA3ESpq0EeW70BJZtLVfGQ
    The expected cost is 4.6832.
    The expected gradient = [0.31722, 0.87232, 1.64812, 2.23787]
    '''
    test_X = np.array([[1, 8, 1, 6], [1, 3, 5, 7], [1, 4, 9, 2]])  # X
    test_y = np.array([[1, 0, 1]])  # y
    test_theta = np.array([-2, -1, 1, 2])
    grad_diff = grad_func(test_theta, test_X, test_y) - np.array([0.31722, 0.87232, 1.64812, 2.23787])
    assert grad_diff.dot(grad_diff.T) < 0.0001
    assert abs(cost_func(test_theta, test_X, test_y) - 4.6832) < 0.0001

check_functions(grad, cost)

# `cutoff` slices out a subset of rows.
cutoff = 2
print minimize(fun=cost, x0=initial_theta, args=(X[0:cutoff, :], y[0:cutoff]), jac=grad)
This code fails with:
fun: nan
hess_inv: array([[1, 0, 0],
                 [0, 1, 0],
                 [0, 0, 1]])
jac: array([ 0., 0., 0.])
message: 'Desired error not necessarily achieved due to precision loss.'
nfev: 32
nit: 1
njev: 32
status: 2
success: False
x: array([ -0.5 , -16.2275926 , -30.47992258])
/Users/liavkoren/Envs/data-sci/lib/python2.7/site-packages/ipykernel/__main__.py:25: RuntimeWarning: overflow encountered in power
/Users/liavkoren/Envs/data-sci/lib/python2.7/site-packages/ipykernel/__main__.py:38: RuntimeWarning: divide by zero encountered in log
/Users/liavkoren/Envs/data-sci/lib/python2.7/site-packages/ipykernel/__main__.py:42: RuntimeWarning: divide by zero encountered in log
There was overflow occurring in the calls to np.power inside the sigmoid function. I added debugging messages into the cost function and saw the following:
theta: [ 0.  0.  0.]
--
X: [[  1.          34.62365962  78.02469282]
    [  1.          30.28671077  43.89499752]]
--
y=1: [ 0.5  0.5]   y=0: [ 0.5  0.5]
log probabilities:
y=1: [-0.69314718 -0.69314718]
y=0: [-0.69314718 -0.69314718]
=======
theta: [ -0.5        -16.2275926  -30.47992258]
--
X: [[  1.          34.62365962  78.02469282]
    [  1.          30.28671077  43.89499752]]
--
y=1: [ 0.  0.]   y=0: [ 1.  1.]
log probabilities:
y=1: [-inf -inf]
y=0: [ 0.  0.]
This overflows on the second iteration!
I quickly confirmed that this does seem to be the problem: after scaling the dataset down by a factor of 10, it converged. I guess I will have to look at feature scaling/normalization or some other strategies for avoiding overflow.
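For reference, here is a sketch of the two usual fixes (an addition, not part of the original post): standardize the features before optimizing, and use scipy.special.expit as a numerically stable sigmoid. It reuses the cost, grad, data and y defined above; note that since the toy data is linearly separable, the unregularized objective has no finite minimizer, so the optimizer may still report a precision-loss message even though it no longer overflows.
from scipy.special import expit  # numerically stable sigmoid

# Rebind h to the stable sigmoid; cost() and grad() look h up at call time,
# so they will pick this version up.
h = lambda theta, x: expit(x.dot(theta))

# Standardize the raw features so theta.dot(x) stays in a moderate range.
X_scaled = (data - data.mean(axis=0)) / data.std(axis=0)
X = np.concatenate([np.ones((X_scaled.shape[0], 1)), X_scaled], axis=1)

result = minimize(fun=cost, x0=np.zeros(X.shape[1]), args=(X, y), jac=grad)
print(result.x)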

Scipy Sparse Matrix special subtraction

I'm working on a project that involves a lot of matrix computation, and I'm looking for a smart way to speed up my code. I'm dealing with a sparse matrix of size 100M x 1M with around 10M non-zero values. The example below is just to illustrate my point.
Let's say I have:
A vector v of size (2)
A vector c of size (3)
A sparse matrix X of size (2,3)
import numpy as np
import scipy.sparse
from scipy.sparse import coo_matrix

v = np.asarray([10, 20])
c = np.asarray([2, 3, 4])
data = np.array([1, 1, 1, 1])
row = np.array([0, 0, 1, 1])
col = np.array([1, 2, 0, 2])
X = coo_matrix((data, (row, col)), shape=(2, 3))
X.todense()
# matrix([[0, 1, 1],
#         [1, 0, 1]])
Currently I'm doing:
result = np.zeros_like(v)
d = scipy.sparse.lil_matrix((v.shape[0], v.shape[0]))
d.setdiag(v)
tmp = d * X
print tmp.todense()
# matrix([[  0.,  10.,  10.],
#         [ 20.,   0.,  20.]])
# At this point tmp is a csr sparse matrix
for i in range(tmp.shape[0]):
    x_i = tmp.getrow(i)
    result += x_i.data * (c[x_i.indices] - x_i.data)
    # I only want to do the subtraction on non-zero elements
print result
# array([-430, -380])
And my problem is the for loop and especially the subtraction.
I would like to find a way to vectorize this operation by subtracting only on the non-zero elements.
Something that directly gives the sparse matrix of the subtraction:
matrix([[  0.,  -7.,  -6.],
        [-18.,   0., -16.]])
Is there a way to do this smartly?
You don't need to loop over the rows to do what you are already doing. And you can use a similar trick to perform the multiplication of the rows by the first vector:
import numpy as np
import scipy.sparse as sps

# X must be in CSR format for .indptr/.indices (use X = X.tocsr() if it is COO)
# number of nonzero entries per row of X
nnz_per_row = np.diff(X.indptr)
# multiply every row by the corresponding entry of v
# You could do this in-place as:
# X.data *= np.repeat(v, nnz_per_row)
Y = sps.csr_matrix((X.data * np.repeat(v, nnz_per_row), X.indices, X.indptr),
                   shape=X.shape)
# subtract from the non-zero entries the corresponding column value in c...
Y.data -= np.take(c, Y.indices)
# ...and multiply by -1 to get the value you are after
Y.data *= -1
To see that it works, set up some dummy data
rows, cols = 3, 5
v = np.random.rand(rows)
c = np.random.rand(cols)
X = sps.rand(rows, cols, density=0.5, format='csr')
and after running the code above:
>>> x = X.toarray()
>>> mask = x == 0
>>> x *= v[:, np.newaxis]
>>> x = c - x
>>> x[mask] = 0
>>> x
array([[ 0.79935123,  0.        ,  0.        , -0.0097763 ,  0.59901243],
       [ 0.7522559 ,  0.        ,  0.67510109,  0.        ,  0.36240006],
       [ 0.        ,  0.        ,  0.72370725,  0.        ,  0.        ]])
>>> Y.toarray()
array([[ 0.79935123,  0.        ,  0.        , -0.0097763 ,  0.59901243],
       [ 0.7522559 ,  0.        ,  0.67510109,  0.        ,  0.36240006],
       [ 0.        ,  0.        ,  0.72370725,  0.        ,  0.        ]])
The way you are accumulating your result requires that there are the same number of non-zero entries in every row, which seems a pretty weird thing to do. Are you sure that is what you are after? If that's really what you want you could get that value with something like:
result = np.sum(Y.data.reshape(Y.shape[0], -1), axis=0)
but I have trouble believing that is really what you are after...
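For completeness, here is a small sketch (an addition, not part of the answer) that reproduces the question's result vector [-430, -380] on the toy data without the Python loop, using the same per-row scaling trick; it assumes X is in CSR format and, like the original accumulation, that every row has the same number of non-zeros.
import numpy as np
import scipy.sparse as sps

v = np.asarray([10, 20])
c = np.asarray([2, 3, 4])
X = sps.csr_matrix((np.ones(4),
                    (np.array([0, 0, 1, 1]), np.array([1, 2, 0, 2]))),
                   shape=(2, 3))

nnz_per_row = np.diff(X.indptr)
scaled = X.data * np.repeat(v, nnz_per_row)  # the non-zeros of d * X from the question
diff = np.take(c, X.indices) - scaled        # c[j] - (d * X)[i, j] on the non-zeros only
result = (scaled * diff).reshape(X.shape[0], -1).sum(axis=0)
print(result)  # [-430. -380.]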
