I want to write a function that calculates the Euclidean distance from each coordinate in list_a to each coordinate in list_b, producing an array of distances with a rows and b columns (where a is the number of coordinates in list_a and b is the number of coordinates in list_b).
NB: I do not want to use any libraries other than numpy, for simplicity.
list_a = np.array([[0,1], [2,2], [5,4], [3,6], [4,2]])
list_b = np.array([[0,1],[5,4]])
Running the function would generate:
>>> np.array([[0., 5.830951894845301],
              [2.23606797749979, 3.605551275463989],
              [5.830951894845301, 0.],
              [5.830951894845301, 2.8284271247461903],
              [4.123105625617661, 2.23606797749979]])
I have been trying to run the code below:
def run_euc(list_a, list_b):
    euc_1 = [np.subtract(list_a, list_b)]
    euc_2 = sum(sum([i**2 for i in euc_1]))
    return np.sqrt(euc_2)
But I am getting the following error:
ValueError: operands could not be broadcast together with shapes (5,2) (2,2)
Thank you.
Here, you can just use np.linalg.norm to compute the Euclidean distance. Your error occurs because np.subtract expects its two inputs to have the same shape (or shapes that broadcast together), and (5, 2) and (2, 2) do not.
import numpy as np
list_a = np.array([[0,1], [2,2], [5,4], [3,6], [4,2]])
list_b = np.array([[0,1],[5,4]])
def run_euc(list_a, list_b):
    return np.array([[np.linalg.norm(i - j) for j in list_b] for i in list_a])
print(run_euc(list_a, list_b))
The code produces:
[[0. 5.83095189]
[2.23606798 3.60555128]
[5.83095189 0. ]
[5.83095189 2.82842712]
[4.12310563 2.23606798]]
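If you want to avoid the Python-level loops entirely, the same matrix can be computed with broadcasting. Here is a minimal sketch (run_euc_vectorized is just an illustrative name):
def run_euc_vectorized(a, b):
    # Insert axes so a becomes (m, 1, k) and b becomes (1, n, k);
    # broadcasting then produces all m*n pairwise differences at once.
    diff = a[:, None, :] - b[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))
print(run_euc_vectorized(list_a, list_b))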
I wonder what is stopping you from using SciPy. Since you are already using NumPy, adding SciPy is not a heavy extra dependency.
Why?
It has many mathematical functions with efficient implementations to make good use of your computing power.
With that in mind, here is a distance_matrix function exactly for the purpose you've mentioned.
Concretely, it takes your list_a (an m x k matrix) and list_b (an n x k matrix) and outputs an m x n matrix with the p-norm distance (p=2 for Euclidean) between each pair of points across the two matrices.
from scipy.spatial import distance_matrix
distances = distance_matrix(list_a, list_b)
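A minimal runnable sketch, reusing the arrays from the question:
import numpy as np
from scipy.spatial import distance_matrix
list_a = np.array([[0, 1], [2, 2], [5, 4], [3, 6], [4, 2]])
list_b = np.array([[0, 1], [5, 4]])
# p=2 (the default) gives the Euclidean distance; p=1 would give Manhattan.
print(distance_matrix(list_a, list_b, p=2))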
I think this works, though note that it computes the distance between one pair of points rather than the whole matrix:
import numpy as np
def distance(x, y):
    x = np.array(x)
    y = np.array(y)
    p = np.sum((x - y) ** 2)
    d = np.sqrt(p)
    return d
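To get the requested a-by-b matrix out of this pairwise helper, you would still need a loop over both lists; a small sketch:
list_a = np.array([[0, 1], [2, 2], [5, 4], [3, 6], [4, 2]])
list_b = np.array([[0, 1], [5, 4]])
# Apply distance() to every (a, b) pair to build the full matrix.
dists = np.array([[distance(a, b) for b in list_b] for a in list_a])
print(dists)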
Another way you can do this is:
np.array(
    [np.sqrt((list_a[:, 1] - list_b[i, 1])**2 + (list_a[:, 0] - list_b[i, 0])**2) for i in range(len(list_b))]
).T
Output:
array([[0. , 5.83095189],
[2.23606798, 3.60555128],
[5.83095189, 0. ],
[5.83095189, 2.82842712],
[4.12310563, 2.23606798]])
This code could surely be written in a simpler and more efficient way, so if you see anything that could be improved, please let me know in the comments.
I hope this answers the question, but it is essentially a duplicate of:
Minimum Euclidean distance between points in two different Numpy arrays, not within
# Import package
import numpy as np
# Define unequal matrices
xy1 = np.array([[0,1], [2,2], [5,4], [3,6], [4,2]])
xy2 = np.array([[0,1],[5,4]])
# ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y, so build the outer sum of
# squared norms P and the cross dot products N, then combine them.
P = np.add.outer(np.sum(xy1**2, axis=1), np.sum(xy2**2, axis=1))
N = np.dot(xy1, xy2.T)
dists = np.sqrt(P - 2*N)
print(dists)
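One caveat: floating-point rounding can make P - 2*N come out very slightly negative when a point is compared with itself, so a safer final line clamps at zero before the square root:
dists = np.sqrt(np.maximum(P - 2*N, 0))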
Using scipy, you can compute the distance between each pair as follows:
import numpy as np
from scipy.spatial import distance
list_a = np.array([[0,1], [2,2], [5,4], [3,6], [4,2]])
list_b = np.array([[0,1],[5,4]])
dist = distance.cdist(list_a, list_b, 'euclidean')
print(dist)
Result:
array([[0. , 5.83095189],
[2.23606798, 3.60555128],
[5.83095189, 0. ],
[5.83095189, 2.82842712],
[4.12310563, 2.23606798]])
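cdist also supports many other metrics; for example, 'sqeuclidean' returns the squared Euclidean distances and skips the square root:
dist_sq = distance.cdist(list_a, list_b, 'sqeuclidean')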
I was trying to implement the matrix exponential function as in scipy.linalg.expm. I drew inspiration from kaityo256's GitHub repository and thus wrote down the following.
from scipy.linalg import expm
from scipy.linalg import eigh
from scipy.linalg import inv
from math import exp as math_exp
from numpy import array, zeros
from numpy.random import random_sample
from numpy.testing import assert_allclose
def diag2sqr(x):
    '''Makes a square matrix from a diagonal one.

    Takes a 1d matrix. Determines its data type.
    Finds out the shape of the 1d matrix.
    Makes an empty square matrix with both
    dimensions equal to the largest (nonzero) dimension of
    the 1d matrix. It then fills the elements of the
    1d matrix into the diagonal slots of the empty
    square one.

    Parameters
    ----------
    x : ndarray
        ndarray to be converted to a square ndarray

    Returns
    -------
    xsqr : ndarray
        ndarray with diagonal the same as that of x,
        all other elements zero,
        dtype same as that of x
    '''
    x_flat = x.ravel()
    # Making the empty matrix
    xsqr = zeros((x_flat.shape[0], x_flat.shape[0]), dtype=x.dtype)
    # Filling the ith element into the ith diagonal slot
    for i in range(x_flat.shape[0]):
        xsqr[i, i] = x_flat[i]
    print('xsqr', xsqr)
    return xsqr
def kaityo_expm(x):
    '''Exponentiates an ndarray (kaityo).

    Exponentiates a ndarray in the most naive way.

    Parameters
    ----------
    x : ndarray
        The ndarray to be exponentiated

    Returns
    -------
    kexpm : ndarray
        x after exponentiating
    '''
    # Find eigenvalues and eigenvectors;
    # the eigenvectors are composed to form a unitary
    rx, ux = eigh(x)
    # Inverse of the unitary
    ux_inv = inv(ux)
    # Constructing the diagonal matrix
    # tx = diag([array([math_exp(i) for i in rx]).ravel()])
    # tx = array([math_exp(i) for i in rx])
    tx = diag2sqr(array([math_exp(i) for i in rx]))
    kexpm1 = tx @ ux_inv
    kexpm = ux @ kexpm1
    return kexpm
Afterwards, I tried to test the above code versus scipy.linalg.expm.
x = random_sample((10, 10))
assert_allclose(expm(x), kaityo_expm(x))
This leads to the following output.
AssertionError:
Not equal to tolerance rtol=1e-07, atol=0
Mismatch: 100%
Max absolute difference: 7.04655733
Max relative difference: 0.59875635
x: array([[18.032424, 16.224408, 12.432163, 16.614248, 12.85653 , 13.705387,
15.096966, 10.577946, 18.399573, 17.938062],
[16.352809, 17.525898, 12.79079 , 16.295562, 13.512996, 14.407979,...
y: array([[18.649103, 13.157682, 11.264763, 16.099163, 15.2293 , 17.854499,
11.691586, 13.412066, 15.023189, 15.598455],
[13.157682, 13.612502, 9.628261, 12.659313, 13.559437, 13.382417,..
Obviously, both the implementations differ.
The questions are as follows:
Is it acceptable for them to differ?
Is my implementation wrong?
If my implementation is wrong, how do I fix it?
If my implementation is correct, when is it safe to use scipy.linalg.expm?
I have seen the following questions:
Matrix exponentiation with scipy: expm, expm2 and expm3
From a mathematical standpoint, the exponential of a matrix is defined using the Taylor series of the exponential:

exp(A) = I + A + A^2/2! + A^3/3! + ... = sum over k of A^k / k!

If A is a diagonal square matrix, the exponential is obtained by simply exponentiating each entry on the diagonal:

exp(diag(a_1, ..., a_n)) = diag(e^{a_1}, ..., e^{a_n})

The problem arises when A is a generic square matrix, so before taking the exponential you will need to diagonalize it using its eigenvalues and eigenvectors:

A = U Lambda U^{-1}

with U the matrix of eigenvectors and Lambda the matrix with the eigenvalues on the diagonal. At this point we are close to finding the exponential of the matrix:

exp(A) = U exp(Lambda) U^{-1}

Now let's implement this result in a simple script:
>>> import numpy as np
>>> import scipy.linalg as ln
>>> A = [[2/3, -4/3, 2],
[5/6, 4/3, -2],
[5/6, -2/3, 0]]
>>> A = np.matrix(A)
>>> print(A)
[[ 0.66666667 -1.33333333 2. ]
[ 0.83333333 1.33333333 -2. ]
[ 0.83333333 -0.66666667 0. ]]
>>> eigvalue, eigvectors = np.linalg.eig(A)
>>> print("eigvalue: ", eigvalue)
>>> print("eigvectors:")
>>> print(eigvectors)
eigvalue: [ 1. -1. 2.]
eigvectors:
[[ 0.81649658 0.27216553 0.87287156]
[ 0.40824829 -0.68041382 -0.21821789]
[ 0.40824829 -0.68041382 0.43643578]]
>>> e_Lambda = np.eye(np.size(A, 0))*(np.exp(eigvalue))
>>> print(e_Lambda)
[[2.71828183 0. 0. ]
[0. 0.36787944 0. ]
[0. 0. 7.3890561 ]]
>>> e_A = eigvectors*e_Lambda*eigvectors.I
>>> print(e_A)
[[ 2.3265481 -6.22769903 7.01116649]
[ 0.97933433 4.27520659 -3.51559341]
[ 0.97933433 -3.11384951 3.87346269]]
>>> e_A2 = ln.expm(A)
>>> print(e_A2)
[[ 2.3265481 -6.22769903 7.01116649]
[ 0.97933433 4.27520659 -3.51559341]
[ 0.97933433 -3.11384951 3.87346269]]
>>> np.testing.assert_allclose(e_A, e_A2)
>>> print(e_A - e_A2)
[[-1.77635684e-15 1.77635684e-15 -8.88178420e-16]
[ 4.44089210e-16 -1.77635684e-15 8.88178420e-16]
[-2.22044605e-16 0.00000000e+00 4.44089210e-16]]
We see that the result is basically the same, so I think it's safe to use scipy.linalg.expm for matrix exponentiation.
I created a repo with the notebook for further testing.
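Incidentally, the failing test in the question is most likely because eigh assumes a symmetric (Hermitian) matrix, while random_sample((10, 10)) is generally not symmetric. Symmetrizing the input first should make the two implementations agree; a sketch under that assumption:
from numpy.random import random_sample
from numpy.testing import assert_allclose
from scipy.linalg import expm
x = random_sample((10, 10))
x = x + x.T  # symmetrize so the eigh-based decomposition is valid
assert_allclose(expm(x), kaityo_expm(x))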
This is the Python Code:
import numpy as np
def find_nearest_vector(array, value):
    idx = np.array([np.linalg.norm(x+y) for (x,y) in array-value]).argmin()
    return array[idx]
A = np.random.random((10,2))*100
""" A = array([[ 34.19762933, 43.14534123],
[ 48.79558706, 47.79243283],
[ 38.42774411, 84.87155478],
[ 63.64371943, 50.7722317 ],
[ 73.56362857, 27.87895698],
[ 96.67790593, 77.76150486],
[ 68.86202147, 21.38735169],
[ 5.21796467, 59.17051276],
[ 82.92389467, 99.90387851],
[ 6.76626539, 30.50661753]])"""
pt = [6, 30]
print(find_nearest_vector(A, pt))
#array([ 6.76626539, 30.50661753])
Can somebody explain to me the step-by-step process of getting the nearest vector, i.e. the whole process of the function find_nearest_vector()? Can someone show me how to trace through this function? Thank you.
From Wikipedia, the L2 (Euclidean) norm of a vector x = (x_1, ..., x_n) is defined as

||x||_2 = sqrt(x_1^2 + x_2^2 + ... + x_n^2)
np.linalg.norm simply implements this formula in numpy, but only works for two points at a time. Additionally, it appears your implementation is incorrect; as @unutbu pointed out, it only happens to work by chance in some cases.
If you want to vectorize this, I'd recommend implementing the L2 norm yourself with vectorised numpy.
This works when pt is a 1D array:
>>> pt = np.array(pt)
>>> A[((A - pt[None, :]) ** 2).sum(1).argmin()]
array([ 6.76626539, 30.50661753])
Note, the closest point will have the smallest L2 norm as well as the smallest squared L2 norm, so this is, in a sense, even more efficient than np.linalg.norm which additionally computes the square root.
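For completeness, a corrected, fully vectorised find_nearest_vector along those lines (a sketch, not the original poster's code):
import numpy as np
def find_nearest_vector(array, value):
    # Proper Euclidean distance from every row of `array` to `value`, computed at once.
    idx = np.linalg.norm(array - np.asarray(value), axis=1).argmin()
    return array[idx]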
In python, is there a vectorized, efficient way to calculate the cosine distance of a sparse array u to a sparse matrix v, resulting in an array of n elements corresponding to cosine(u, v[0]), cosine(u, v[1]), ..., cosine(u, v[n-1])?
Not natively. You can, however, use the scipy library, which can compute the cosine distance between two vectors for you: http://docs.scipy.org/doc/scipy-0.17.0/reference/generated/scipy.spatial.distance.cosine.html. You can build a version that takes a matrix using this as a stepping stone.
Add the vector onto the end of the matrix, calculate a pairwise distance matrix using sklearn.metrics.pairwise_distances() and then extract the relevant column/row.
So for vector v (with shape (D,)) and matrix m (with shape (N,D)) do:
import numpy as np
from sklearn.metrics import pairwise_distances
new_m = np.concatenate([m, v[None, :]], axis=0)
distance_matrix = pairwise_distances(new_m, metric="cosine")
distances = distance_matrix[-1, :-1]
Not ideal, but better than iterating!
This method can be extended if you are querying more than one vector. To do this, a list of vectors can be concatenated instead.
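A sketch of that multi-vector extension, assuming vs holds Q query vectors with shape (Q, D) (cosine_to_matrix is just an illustrative name); note that pairwise_distances(vs, m, metric="cosine") also computes the cross-distances directly:
import numpy as np
from sklearn.metrics import pairwise_distances
def cosine_to_matrix(vs, m):
    # Stack the queries under the matrix, compute all pairwise cosine
    # distances, then slice out the queries-vs-matrix block.
    new_m = np.concatenate([m, vs], axis=0)
    d = pairwise_distances(new_m, metric="cosine")
    return d[len(m):, :len(m)]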
I think there is a way using the definition and the numpy library. The cosine distance between u and v is defined as:

cosine(u, v) = 1 - (u . v) / (||u|| ||v||)
import numpy as np
#just creating random data
u = np.random.random(100)
v = np.random.random((100,100))
#dot product: for every row in v, multiply u and sum the elements
u_dot_v = np.sum(u*v,axis = 1)
#find the norm of u and each row of v
mod_u = np.sqrt(np.sum(u*u))
mod_v = np.sqrt(np.sum(v*v,axis = 1))
#just apply the definition
final = 1 - u_dot_v/(mod_u*mod_v)
#verify with the cosine function from scipy
from scipy.spatial.distance import cosine
final2 = np.array([cosine(u,i) for i in v])
I found the definition of cosine distance here: https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cosine.html#scipy.spatial.distance.cosine
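A quick sanity check that the vectorised result matches the scipy one:
print(np.allclose(final, final2))  # expected: True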
It is available as scipy.spatial.distance.cosine():
http://docs.scipy.org/doc/scipy-0.17.0/reference/generated/scipy.spatial.distance.cosine.html
The below worked for me; you have to provide the correct signature:
import numpy as np
from scipy.spatial.distance import cosine

def cosine_distances(embedding_matrix, extracted_embedding):
    return cosine(embedding_matrix, extracted_embedding)

cosine_distances = np.vectorize(cosine_distances, signature='(m),(d)->()')
cosine_distances(corpus_embeddings, extracted_embedding)
In my case
corpus_embeddings is a (10000,128) matrix
extracted_embedding is a 128-dimensional vector
I have a large matrix A of shape (n, n, 3, 3) with n around 5000. Now I want to find the inverse and transpose of each 3x3 block of A:
import numpy as np
A = np.random.rand(1000, 1000, 3, 3)
identity = np.identity(3, dtype=A.dtype)
Ainv = np.zeros_like(A)
Atrans = np.zeros_like(A)
for i in range(1000):
    for j in range(1000):
        Ainv[i, j] = np.linalg.solve(A[i, j], identity)
        Atrans[i, j] = np.transpose(A[i, j])
Is there a faster, more efficient way to do this?
This is taken from a project of mine, where I also do vectorized linear algebra on many 3x3 matrices.
Note that there is only a loop over 3, not over n, so the code is vectorized over the important dimensions. I can't vouch for how this compares performance-wise to a C/Numba extension doing the same thing; such an extension would likely be substantially faster still, but at least this blows the Python loops over n out of the water.
import numpy as np

def adjoint(A):
    """compute inverse without division by det; ...x3x3 input, i.e. an array of matrices, is assumed"""
    AI = np.empty_like(A)
    for i in range(3):
        AI[..., i, :] = np.cross(A[..., i-2, :], A[..., i-1, :])
    return AI

def inverse_transpose(A):
    """efficiently compute the inverse-transpose for a stack of 3x3 matrices"""
    I = adjoint(A)
    det = dot(I, A).mean(axis=-1)
    return I / det[..., None, None]

def inverse(A):
    """inverse of a stack of 3x3 matrices"""
    return np.swapaxes(inverse_transpose(A), -1, -2)

def dot(A, B):
    """dot arrays of vecs; contract over last indices"""
    return np.einsum('...i,...i->...', A, B)

A = np.random.rand(2, 2, 3, 3)
I = inverse(A)
print(np.einsum('...ij,...jk', A, I))  # should print stacks of 3x3 identity matrices
for the transpose:
testing a bit in ipython showed:
In [1]: import numpy
In [2]: x = numpy.ones((5,6,3,4))
In [3]: numpy.transpose(x,(0,1,3,2)).shape
Out[3]: (5, 6, 4, 3)
so you can just do
Atrans = numpy.transpose(A, (0, 1, 3, 2))
to transpose the last two dimensions (while leaving dimensions 0 and 1 the same)
for the inversion:
the last example of http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.inv.html#numpy.linalg.inv
Inverses of several matrices can be computed at once:
from numpy.linalg import inv
a = np.array([[[1., 2.], [3., 4.]], [[1, 3], [3, 5]]])
>>> inv(a)
array([[[-2. , 1. ],
[ 1.5, -0.5]],
[[-5. , 2. ],
[ 3. , -1. ]]])
So I guess in your case, the inversion can be done with just
Ainv = inv(A)
and it will know that the last two dimensions are the ones it is supposed to invert over, and that the first dimensions are just how you stacked your data. This should be much faster
speed difference
for the transpose: your method needs 3.77557015419 sec, and mine needs 2.86102294922e-06 sec (which is a speedup of over 1 million times)
for the inversion: I guess my numpy version is not high enough to try that numpy.linalg.inv trick with (n, n, 3, 3) shape, so I can't see the speedup there (my version is 1.6.2, and the docs I based my solution on are for 1.8; it should work on 1.8, if someone else can test that?)
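For reference, on any modern NumPy (1.8 or later) both operations are one-liners; a minimal sketch:
import numpy as np
A = np.random.rand(1000, 1000, 3, 3)
Ainv = np.linalg.inv(A)          # batched inverse over the last two axes
Atrans = np.swapaxes(A, -1, -2)  # transpose of each 3x3 block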
Numpy has the array.T property, which is a shortcut for transpose; note that on a 4-D array it reverses all axes, so for a stack of matrices you want np.transpose(A, (0, 1, 3, 2)) or np.swapaxes(A, -1, -2) instead.
For inversions, you use np.linalg.inv(A).
As posted by wim, A.I also works on matrix objects, e.g.
print(A.I)
For a numpy matrix object, you can also use matrix.getI, e.g.
A = numpy.matrix('1 3; 5 6')
print(A.getI())
I have been searching for an answer to this question but cannot find anything useful.
I am working with the python scientific computing stack (scipy, numpy, matplotlib), and I have a set of 2-dimensional points for which I compute the Delaunay triangulation (wiki) using scipy.spatial.Delaunay.
I need to write a function that, given any point a, will return all other points which are vertices of any simplex (i.e. triangle) that a is also a vertex of (the neighbors of a in the triangulation). However, the documentation for scipy.spatial.Delaunay (here) is pretty bad, and I can't for the life of me understand how the simplices are being specified, or how I would go about doing this. Even just an explanation of how the neighbors, vertices and vertex_to_simplex arrays in the Delaunay output are organized would be enough to get me going.
Much thanks for any help.
I figured it out on my own, so here's an explanation for any future person who is confused by this.
As an example, let's use the simple lattice of points that I was working with in my code, which I generate as follows
import numpy as np
import itertools as it
from matplotlib import pyplot as plt
import scipy.spatial as sp

inputs = list(it.product([0, 1, 2], [0, 1, 2]))
# mksite is my helper (not shown here) that maps a pair of lattice
# coordinates to a 2D point.
lattice = [mksite(pair[0], pair[1]) for pair in inputs]
The details here are not really important; suffice it to say that it generates a regular triangular lattice in which the distance between a point and any of its six nearest neighbors is 1.
To plot it:
plt.plot(*np.transpose(lattice), marker='o', ls='')
plt.gca().set_aspect('equal')
Now compute the triangulation:
dela = sp.Delaunay
triang = dela(lattice)
Let's look at what this gives us.
triang.points
output:
array([[ 0. , 0. ],
[ 0.5 , 0.8660254 ],
[ 1. , 1.73205081],
[ 1. , 0. ],
[ 1.5 , 0.8660254 ],
[ 2. , 1.73205081],
[ 2. , 0. ],
[ 2.5 , 0.8660254 ],
[ 3. , 1.73205081]])
simple, just an array of all nine points in the lattice illustrated above. Now let's look at:
triang.vertices
output:
array([[4, 3, 6],
[5, 4, 2],
[1, 3, 0],
[1, 4, 2],
[1, 4, 3],
[7, 4, 6],
[7, 5, 8],
[7, 5, 4]], dtype=int32)
In this array, each row represents one simplex (triangle) in the triangulation (newer SciPy versions expose this array as triang.simplices rather than triang.vertices). The three entries in each row are the indices of the vertices of that simplex in the points array we just saw. So for example the first simplex in this array, [4, 3, 6], is composed of the points:
[ 1.5 , 0.8660254 ]
[ 1. , 0. ]
[ 2. , 0. ]
It's easy to see this by drawing the lattice on a piece of paper, labeling each point according to its index, and then tracing through each row in triang.vertices.
This is all the information we need to write the function I specified in my question.
It looks like:
def find_neighbors(pindex, triang):
    neighbors = list()
    for simplex in triang.vertices:
        if pindex in simplex:
            # If a simplex contains the point we're interested in, extend
            # the neighbors list with all the *other* point indices in it.
            neighbors.extend([simplex[i] for i in range(len(simplex)) if simplex[i] != pindex])
    # Now strip out all the duplicate indices and return the neighbors list
    return list(set(neighbors))
And that's it! I'm sure the function above could do with some optimization; it's just what I came up with in a few minutes. If anyone has any suggestions, feel free to post them. Hopefully this helps somebody in the future who is as confused about this as I was.
The methods described above cycle through all the simplices, which could take very long if there is a large number of points. A better way might be to use Delaunay.vertex_neighbor_vertices, which already contains all the information about the neighbors. Unfortunately, extracting the information takes a little index arithmetic:
def find_neighbors(pindex, triang):
    index_pointers, indices = triang.vertex_neighbor_vertices
    return indices[index_pointers[pindex]:index_pointers[pindex + 1]]
The following code demonstrates how to get the neighbor indices of some vertex (number 17, in this example):
import scipy.spatial
import numpy
import pylab
x_list = numpy.random.random(200)
y_list = numpy.random.random(200)
tri = scipy.spatial.Delaunay(numpy.array([[x,y] for x,y in zip(x_list, y_list)]))
pindex = 17
neighbor_indices = find_neighbors(pindex,tri)
pylab.plot(x_list, y_list, 'b.')
pylab.plot(x_list[pindex], y_list[pindex], 'dg')
pylab.plot([x_list[i] for i in neighbor_indices],
           [y_list[i] for i in neighbor_indices], 'ro')
pylab.show()
I know it's been a while since this question was posed. However, I just had the same problem and figured out how to solve it. Just use the (somewhat poorly documented) method vertex_neighbor_vertices of your Delaunay triangulation object (let us call it 'tri').
It will return two arrays:
def get_neighbor_vertex_ids_from_vertex_id(vertex_id, tri):
    index_pointers, indices = tri.vertex_neighbor_vertices
    result_ids = indices[index_pointers[vertex_id]:index_pointers[vertex_id + 1]]
    return result_ids
The neighbor vertices to the point with the index vertex_id are stored somewhere in the second array, which I named 'indices'. But where? This is where the first array (which I called 'index_pointers') comes in. The starting position (in the second array 'indices') is index_pointers[vertex_id], and the first position past the relevant sub-array is index_pointers[vertex_id + 1]. So the solution is indices[index_pointers[vertex_id]:index_pointers[vertex_id + 1]].
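A quick usage sketch of the function above on random points:
import numpy as np
from scipy.spatial import Delaunay
points = np.random.random((50, 2))
tri = Delaunay(points)
print(get_neighbor_vertex_ids_from_vertex_id(0, tri))  # neighbor indices of vertex 0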
Here is an elaboration on @astrofrog's answer. This also works in more than 2D.
It took about 300 ms on a set of 2430 points in 3D (about 16000 simplices).
from collections import defaultdict

def find_neighbors(tess):
    neighbors = defaultdict(set)
    for simplex in tess.simplices:
        for idx in simplex:
            other = set(simplex)
            other.remove(idx)
            neighbors[idx] = neighbors[idx].union(other)
    return neighbors
Here is also a simple one-line version of James Porter's own answer, using a list comprehension:
find_neighbors = lambda x, triang: list(set(indx for simplex in triang.simplices if x in simplex for indx in simplex if indx != x))
I needed this too and came across the following answer. It turns out that if you need the neighbors for all initial points, it's much more efficient to produce a dictionary of neighbors in one go (the following example is for 2D):
def find_neighbors(tess, points):
    neighbors = {}
    for point in range(points.shape[0]):
        neighbors[point] = []
    for simplex in tess.simplices:
        neighbors[simplex[0]] += [simplex[1], simplex[2]]
        neighbors[simplex[1]] += [simplex[2], simplex[0]]
        neighbors[simplex[2]] += [simplex[0], simplex[1]]
    return neighbors
The neighbors of point v are then neighbors[v]. For 10,000 points in 2D this takes 370 ms to run on my laptop. Maybe others have ideas on optimizing this further?
All the answers here are focused on getting the neighbors of one point (except astrofrog's, but that is 2D-only and this is 6x faster); however, it's equally expensive to get a mapping for all of the points → all neighbors.
You can do this with
from collections import defaultdict
from itertools import permutations
from scipy.spatial import Delaunay

tri = Delaunay(...)  # your points here
_neighbors = defaultdict(set)
for simplex in tri.vertices:
    for i, j in permutations(simplex, 2):
        _neighbors[i].add(j)

points = [tuple(p) for p in tri.points]
neighbors = {}
for k, v in _neighbors.items():
    neighbors[points[k]] = [points[i] for i in v]
This works in any dimension, and this solution, finding all neighbors of all points, is faster than finding only the neighbors of one point (as in the accepted answer of James Porter).
Here's mine; it takes around 30 ms on a cloud of 11,000 points in 2D.
It gives you a 2xP array of indices, where P is the number of pairs of neighbours that exist.
import numpy as np
from scipy.spatial import Delaunay

def get_delaunay_neighbour_indices(vertices: "Array['N,D', int]") -> "Array['2,P', int]":
    """
    Find each pair of neighbouring vertices in the Delaunay triangulation.
    :param vertices: The vertices of the points to perform Delaunay triangulation on
    :return: The pairs of indices of vertices
    """
    tri = Delaunay(vertices)
    spacing_indices, neighbours = tri.vertex_neighbor_vertices
    ixs = np.zeros((2, len(neighbours)), dtype=int)
    # The argmax is unfortunately needed when multiple final elements are the same
    np.add.at(ixs[0], spacing_indices[1:int(np.argmax(spacing_indices))], 1)
    ixs[0, :] = np.cumsum(ixs[0, :])
    ixs[1, :] = neighbours
    assert np.max(ixs) < len(vertices)
    return ixs
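A quick usage sketch:
import numpy as np
points = np.random.random((100, 2))
pairs = get_delaunay_neighbour_indices(points)
# pairs[0, k] and pairs[1, k] are the vertex indices of the k-th neighbouring pair.
print(pairs.shape)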
We can find one simplex containing the vertex (tri.vertex_to_simplex[vertex]) and then recursively search the neighbors of this simplex (tri.neighbors) to find other simplices containing the vertex.
from scipy.spatial import Delaunay

tri = Delaunay(points)  # points is the list of input points
neighbors = []  # list of neighbor lists for all vertices
for vertex in range(len(points)):
    vertexneighbors = []  # neighbors of this vertex
    neighbour1 = -1
    neighbour2 = -1
    firstneighbour = -1
    neighbour1index = -1
    # One simplex known to contain the vertex
    currentsimplexno = tri.vertex_to_simplex[vertex]
    for i in range(3):
        if tri.simplices[currentsimplexno][i] == vertex:
            firstneighbour = tri.simplices[currentsimplexno][(i + 1) % 3]
            vertexneighbors.append(firstneighbour)
            neighbour1index = (i + 1) % 3
            neighbour1 = tri.simplices[currentsimplexno][(i + 1) % 3]
            neighbour2 = tri.simplices[currentsimplexno][(i + 2) % 3]
    # Walk around the vertex through neighboring simplices until we are
    # back at the first neighbor (note: this assumes the vertex is interior;
    # on the convex hull tri.neighbors can return -1)
    while neighbour2 != firstneighbour:
        vertexneighbors.append(neighbour2)
        currentsimplexno = tri.neighbors[currentsimplexno][neighbour1index]
        for i in range(3):
            if tri.simplices[currentsimplexno][i] == vertex:
                neighbour1index = (i + 1) % 3
                neighbour1 = tri.simplices[currentsimplexno][(i + 1) % 3]
                neighbour2 = tri.simplices[currentsimplexno][(i + 2) % 3]
    neighbors.append(vertexneighbors)
print(neighbors)