igraph isomorphism check misses some isomorphisms - python

I have a rather complicated graph, from which I remove unconnected nodes, decompose it into its connected subgraphs, and then want to count the occurrence of each subgraph.
To do so, I check for each subgraph whether it is isomorphic to any of the already established subgraph "types", and if not I add it to the list of subgraph "types".
I get some unexpected results where, for example, two nodes connected by an edge are not considered isomorphic to the first group. Below I have highlighted the first two rows, whose graphs I thought belonged to already existing groups:
List of "distinct" subgraphs
MWE of my code, raw pkl of the graph here:
import matplotlib
matplotlib.use('Agg')
import numpy as np
from igraph import Graph
import igraph
import matplotlib.pyplot as plt
from math import ceil
# read graph from file
motif_graph = Graph.Read_Pickle("motif_graph.pkl")
# delete all empty nodes
motif_graph.vs.select(_degree=0).delete()
# get connected clusters
clusters = motif_graph.decompose(mode=igraph.WEAK)
# finds the common motifs
for i in range(0, len(clusters) - 1):
    # if the first graph to look at, add it to the list
    if i == 0:
        list_of_motifs = np.array([[clusters[i], 1]])
        continue
    # for all subsequent graphs, compare them to all graphs in the
    # list_of_motifs until an isomorphic graph has been found
    for k in range(0, list_of_motifs.shape[0] - 1):
        g = clusters[i]
        g1 = list_of_motifs[k, 0]
        if g.isomorphic_vf2(g1):
            list_of_motifs[k, 1] += 1
            break
    else:
        # if no isomorphic graph has been found, add to list
        list_of_motifs = np.vstack((list_of_motifs, np.array([clusters[i], 1])))
# sort list by occurrence and list most common at the top
ind = np.argsort(list_of_motifs[:, 1])
sorted_list_of_motifs = np.flip(list_of_motifs[ind], axis=0)

dim = ceil(np.sqrt(len(sorted_list_of_motifs) - 1))
fig, axs = plt.subplots(dim, dim, figsize=(35, 35), dpi=300)
counter = 0
for i in range(0, dim):
    for k in range(0, dim):
        axs[i, k].set_axis_off()
        igraph.plot(sorted_list_of_motifs[counter, 0], target=axs[i, k])
        title_string = "Occurrences: {}".format(sorted_list_of_motifs[counter, 1])
        axs[i, k].set_title(title_string)
        counter += 1
        if counter == len(sorted_list_of_motifs):
            break
plt.savefig("Whole_Graph.png")
plt.close('all')
I also tried the solution presented here with the same result.
Pastebin of MWE2
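For reference, here is a minimal sketch of the counting logic described above, using a plain Python list and a for/else so that a cluster is only appended as a new motif when none of the stored motifs is isomorphic to it. This is my own sketch against the same pickled graph, not the original MWE:

# Minimal counting sketch (assumes the same motif_graph.pkl as above).
from igraph import Graph
import igraph

motif_graph = Graph.Read_Pickle("motif_graph.pkl")
motif_graph.vs.select(_degree=0).delete()
clusters = motif_graph.decompose(mode=igraph.WEAK)

motifs = []  # list of [graph, count] pairs
for cluster in clusters:
    for entry in motifs:
        if cluster.isomorphic_vf2(entry[0]):
            entry[1] += 1
            break
    else:
        motifs.append([cluster, 1])

# most common motif first
motifs.sort(key=lambda entry: entry[1], reverse=True)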

Related

Generate all digraphs of a given size up to isomorphism

I am trying to generate all directed graphs with a given number of nodes, up to graph isomorphism, so that I can feed them into another Python program. Here is a naive reference implementation using NetworkX; I would like to speed it up:
from itertools import combinations, product
import networkx as nx
def generate_digraphs(n):
    graphs_so_far = list()
    nodes = list(range(n))
    possible_edges = [(i, j) for i, j in product(nodes, nodes) if i != j]
    for edge_mask in product([True, False], repeat=len(possible_edges)):
        edges = [edge for include, edge in zip(edge_mask, possible_edges) if include]
        g = nx.DiGraph()
        g.add_nodes_from(nodes)
        g.add_edges_from(edges)
        if not any(nx.is_isomorphic(g_before, g) for g_before in graphs_so_far):
            graphs_so_far.append(g)
    return graphs_so_far
assert len(generate_digraphs(1)) == 1
assert len(generate_digraphs(2)) == 3
assert len(generate_digraphs(3)) == 16
The number of such graphs seems to grow pretty quickly and is given by this OEIS sequence. I am looking for a solution that is able to generate all graphs up to 7 nodes (about a billion graphs in total) in a reasonable amount of time.
Representing a graph as a NetworkX object is not very important; for example, representing a graph with an adjacency list or using a different library is good with me.
There’s a useful idea that I learned from Brendan McKay’s paper
“Isomorph-free exhaustive generation” (though I believe that it predates
that paper).
The idea is that we can organize the isomorphism classes into a tree,
where the singleton class with the empty graph is the root, and each
class with graphs having n > 0 nodes has a parent class with graphs
having n − 1 nodes. To enumerate the isomorphism classes of graphs with
n > 0 nodes, enumerate the isomorphism classes of graphs with n − 1
nodes, and for each such class, extend its representatives in all
possible ways to n nodes and filter out the ones that aren’t actually
children.
The Python code below implements this idea with a rudimentary but
nontrivial graph isomorphism subroutine. It takes a few minutes for n =
6 and (estimating here) on the order of a few days for n = 7. For extra
speed, port it to C++ and maybe find better algorithms for handling the
permutation groups (maybe in TAoCP, though most of the graphs have no
symmetry, so it’s not clear how big the benefit would be).
import cProfile
import collections
import itertools
import random

# Returns labels approximating the orbits of graph. Two nodes in the same orbit
# have the same label, but two nodes in different orbits don't necessarily have
# different labels.
def invariant_labels(graph, n):
    labels = [1] * n
    for r in range(2):
        incoming = [0] * n
        outgoing = [0] * n
        for i, j in graph:
            incoming[j] += labels[i]
            outgoing[i] += labels[j]
        for i in range(n):
            labels[i] = hash((incoming[i], outgoing[i]))
    return labels

# Returns the inverse of perm.
def inverse_permutation(perm):
    n = len(perm)
    inverse = [None] * n
    for i in range(n):
        inverse[perm[i]] = i
    return inverse

# Returns the permutation that sorts by label.
def label_sorting_permutation(labels):
    n = len(labels)
    return inverse_permutation(sorted(range(n), key=lambda i: labels[i]))

# Returns the graph where node i becomes perm[i].
def permuted_graph(perm, graph):
    perm_graph = [(perm[i], perm[j]) for (i, j) in graph]
    perm_graph.sort()
    return perm_graph

# Yields each permutation generated by swaps of two consecutive nodes with the
# same label.
def label_stabilizer(labels):
    n = len(labels)
    factors = (
        itertools.permutations(block)
        for (_, block) in itertools.groupby(range(n), key=lambda i: labels[i])
    )
    for subperms in itertools.product(*factors):
        yield [i for subperm in subperms for i in subperm]

# Returns the canonical labeled graph isomorphic to graph.
def canonical_graph(graph, n):
    labels = invariant_labels(graph, n)
    sorting_perm = label_sorting_permutation(labels)
    graph = permuted_graph(sorting_perm, graph)
    labels.sort()
    return max(
        (permuted_graph(perm, graph), perm[sorting_perm[n - 1]])
        for perm in label_stabilizer(labels)
    )

# Returns the list of permutations that stabilize graph.
def graph_stabilizer(graph, n):
    return [
        perm
        for perm in label_stabilizer(invariant_labels(graph, n))
        if permuted_graph(perm, graph) == graph
    ]

# Yields the subsets of range(n).
def power_set(n):
    for r in range(n + 1):
        for s in itertools.combinations(range(n), r):
            yield list(s)

# Returns the set where i becomes perm[i].
def permuted_set(perm, s):
    perm_s = [perm[i] for i in s]
    perm_s.sort()
    return perm_s

# If s is canonical, returns the list of permutations in group that stabilize s.
# Otherwise, returns None.
def set_stabilizer(s, group):
    stabilizer = []
    for perm in group:
        perm_s = permuted_set(perm, s)
        if perm_s < s:
            return None
        if perm_s == s:
            stabilizer.append(perm)
    return stabilizer

# Yields one representative of each isomorphism class.
def enumerate_graphs(n):
    assert 0 <= n
    if 0 == n:
        yield []
        return
    for subgraph in enumerate_graphs(n - 1):
        sub_stab = graph_stabilizer(subgraph, n - 1)
        for incoming in power_set(n - 1):
            in_stab = set_stabilizer(incoming, sub_stab)
            if not in_stab:
                continue
            for outgoing in power_set(n - 1):
                out_stab = set_stabilizer(outgoing, in_stab)
                if not out_stab:
                    continue
                graph, i_star = canonical_graph(
                    subgraph
                    + [(i, n - 1) for i in incoming]
                    + [(n - 1, j) for j in outgoing],
                    n,
                )
                if i_star == n - 1:
                    yield graph

def test():
    print(sum(1 for graph in enumerate_graphs(5)))

cProfile.run("test()")
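As a quick sanity check (my own, not part of the answer), the enumerator can be compared against the small counts quoted in the question:

# Expected counts from the question's asserts: 1, 3 and 16 isomorphism
# classes for 1, 2 and 3 nodes respectively.
for n in range(1, 4):
    print(n, sum(1 for _ in enumerate_graphs(n)))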
Instead of using nx.is_isomorphic to compare two graphs G1 and G2, you could also generate all graphs that are isomorphic to G1 and check whether G2 is in this set.
At first this sounds more cumbersome, but it allows you to check not just whether G2 is isomorphic to G1, but whether any graph is isomorphic to G1, whereas nx.is_isomorphic always starts from scratch when comparing two graphs.
To make things easier, each graph is stored simply as a list of edges.
Two graphs are considered the same (identical, not merely isomorphic) if their edge sets are equal.
Always storing the edge list as a sorted tuple means that == tests exactly this equality and makes the edge lists hashable.
import itertools

def all_digraphs(n):
    possible_edges = [
        (i, j) for i, j in itertools.product(range(n), repeat=2) if i != j
    ]
    for edge_mask in itertools.product([True, False], repeat=len(possible_edges)):
        # The result is already sorted
        yield tuple(edge for include, edge in zip(edge_mask, possible_edges) if include)

def unique_digraphs(n):
    already_seen = set()
    for graph in all_digraphs(n):
        if graph not in already_seen:
            yield graph
            already_seen |= {
                tuple(sorted((perm[i], perm[j]) for i, j in graph))
                for perm in itertools.permutations(range(n))
            }
Compared to the variants from the previous solution, this gives noticeably better timings on my machine.
This all looks quite promising, but already for 6 nodes my 16GiB of memory is not enough and the Python process is terminated by the operating system.
I'm sure you can combine this code with generating the graphs in batches for each outdegree_sequence as detailed in the previous answer.
This would allow one to empty already_seen after each batch and reduce the memory consumption drastically.
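For what it's worth, a rough sketch of that combination (my own assumption about what it might look like, not code from either answer): generate the edge sets batch by batch per sorted out-degree sequence, deduplicate within each batch, and throw already_seen away between batches so it never holds more than one batch's orbits.

import itertools

def digraphs_with_outdegree_sequence(outdegree_sequence):
    # all edge sets in which node i has out-degree outdegree_sequence[i]
    n = len(outdegree_sequence)
    per_node = [
        itertools.combinations(sorted(set(range(n)) - {node}), degree)
        for node, degree in enumerate(outdegree_sequence)
    ]
    for targets_per_node in itertools.product(*per_node):
        yield tuple(sorted(
            (node, target)
            for node, targets in enumerate(targets_per_node)
            for target in targets
        ))

def unique_digraphs_batched(n):
    # the sorted out-degree sequence is an isomorphism invariant, so graphs
    # from different batches can never be isomorphic to each other
    for outdegree_sequence in itertools.combinations_with_replacement(range(n), n):
        already_seen = set()  # only ever holds the orbits of one batch
        for graph in digraphs_with_outdegree_sequence(outdegree_sequence):
            if graph not in already_seen:
                yield graph
                already_seen |= {
                    tuple(sorted((perm[i], perm[j]) for i, j in graph))
                    for perm in itertools.permutations(range(n))
                }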
98-99% of computation time is used for the isomorphism tests, so the name of the game is to reduce the number of necessary tests.
Here, I create the graphs in batches such that graphs have to be tested for isomorphisms only within a batch.
In the first variant (version 2 below), all graphs within a batch have the same number of edges. This leads to appreciable but moderate improvements in running time (2.5 times faster for graphs of size 4, with larger gains in speed for larger graphs).
In the second variant (version 3 below), all graphs within a batch have the same out-degree sequence. This leads to substantial improvements in running time (35 times faster for graphs of size 4, with larger gains in speed for larger graphs).
In the third variant (version 4 below), all graphs within a batch have the same out-degree sequence. Additionally, within a batch all graphs are sorted by in-degree sequence. This leads to modest improvements in speed compared to version 3 (1.3 times faster for graphs of size 4; 2.1 times faster for graphs of size 5).
#!/usr/bin/env python
"""
Efficient motif generation.
"""
import numpy as np
import matplotlib.pyplot as plt
import networkx as nx

from timeit import timeit
from itertools import combinations, product, chain, combinations_with_replacement

# for profiling with kernprof/line_profiler
try:
    profile
except NameError:
    profile = lambda x: x

@profile
def version_1(n):
    """Original implementation by @hilberts_drinking_problem"""
    graphs_so_far = list()
    nodes = list(range(n))
    possible_edges = [(i, j) for i, j in product(nodes, nodes) if i != j]
    for edge_mask in product([True, False], repeat=len(possible_edges)):
        edges = [edge for include, edge in zip(edge_mask, possible_edges) if include]
        g = nx.DiGraph()
        g.add_nodes_from(nodes)
        g.add_edges_from(edges)
        if not any(nx.is_isomorphic(g_before, g) for g_before in graphs_so_far):
            graphs_so_far.append(g)
    return graphs_so_far

@profile
def version_2(n):
    """Creates graphs in batches, where each batch contains graphs with
    the same number of edges. Only graphs within a batch have to be tested
    for isomorphisms."""
    graphs_so_far = list()
    nodes = list(range(n))
    possible_edges = [(i, j) for i, j in product(nodes, nodes) if i != j]
    for ii in range(len(possible_edges) + 1):
        tmp = []
        for edges in combinations(possible_edges, ii):
            g = nx.from_edgelist(edges, create_using=nx.DiGraph)
            if not any(nx.is_isomorphic(g_before, g) for g_before in tmp):
                tmp.append(g)
        graphs_so_far.extend(tmp)
    return graphs_so_far

@profile
def version_3(n):
    """Creates graphs in batches, where each batch contains graphs with
    the same out-degree sequence. Only graphs within a batch have to be tested
    for isomorphisms."""
    graphs_so_far = list()
    outdegree_sequences_so_far = list()
    for outdegree_sequence in product(*[range(n) for _ in range(n)]):
        # skip degree sequences which we have already seen as the resulting graphs will be isomorphic
        if sorted(outdegree_sequence) not in outdegree_sequences_so_far:
            tmp = []
            for edges in generate_graphs(outdegree_sequence):
                g = nx.from_edgelist(edges, create_using=nx.DiGraph)
                if not any(nx.is_isomorphic(g_before, g) for g_before in tmp):
                    tmp.append(g)
            graphs_so_far.extend(tmp)
            outdegree_sequences_so_far.append(sorted(outdegree_sequence))
    return graphs_so_far

def generate_graphs(outdegree_sequence):
    """Generates all directed graphs with a given out-degree sequence."""
    for edges in product(*[generate_edges(node, degree, len(outdegree_sequence))
                           for node, degree in enumerate(outdegree_sequence)]):
        yield list(chain(*edges))

def generate_edges(node, outdegree, total_nodes):
    """Generates all edges for a given node with a given out-degree and a given graph size."""
    for targets in combinations(set(range(total_nodes)) - {node}, outdegree):
        yield [(node, target) for target in targets]

@profile
def version_4(n):
    """Creates graphs in batches, where each batch contains graphs with
    the same out-degree sequence. Within a batch, graphs are sorted
    by in-degree sequence, such that only graphs with the same
    in-degree sequence have to be tested for isomorphism.
    """
    graphs_so_far = list()
    for outdegree_sequence in combinations_with_replacement(range(n), n):
        tmp = dict()
        for edges in generate_graphs(outdegree_sequence):
            g = nx.from_edgelist(edges, create_using=nx.DiGraph)
            indegree_sequence = tuple(sorted(degree for _, degree in g.in_degree()))
            if indegree_sequence in tmp:
                if not any(nx.is_isomorphic(g_before, g) for g_before in tmp[indegree_sequence]):
                    tmp[indegree_sequence].append(g)
            else:
                tmp[indegree_sequence] = [g]
        for graphs in tmp.values():
            graphs_so_far.extend(graphs)
    return graphs_so_far

if __name__ == '__main__':
    order = range(1, 5)
    t1 = [timeit(lambda: version_1(n), number=3) for n in order]
    t2 = [timeit(lambda: version_2(n), number=3) for n in order]
    t3 = [timeit(lambda: version_3(n), number=3) for n in order]
    t4 = [timeit(lambda: version_4(n), number=3) for n in order]

    fig, ax = plt.subplots()
    for ii, t in enumerate([t1, t2, t3, t4]):
        ax.plot(t, label=f"Version no. {ii+1}")
    ax.set_yscale('log')
    ax.set_ylabel('Execution time [s]')
    ax.set_xlabel('Graph order')
    ax.legend()
    plt.show()

recursive function to return a list of all connected nodes, given a certain node from a network graph, using python

I'm trying to write a function that will return a list of all the connected nodes in a sub-network, given a starting node in that subgraph.
For example, the following graph has two sub-networks, one red and one green, as shown in the following image:
Using Python's networkx package, I ran the following code:
import networkx as nx
import pandas as pd
import numpy as np
G=nx.Graph()
G.add_node(1)
G.add_node(2)
G.add_node(3)
G.add_node(4)
G.add_node(5)
G.add_node(6)
G.add_edge(1,2)
G.add_edge(2,3)
G.add_edge(1,5)
G.add_edge(4,6)
def recurse(G, z, node):
    z.append(node)
    n = list(set(G.neighbors(node)) - set(z))
    if len(n) == 0:
        return []
    else:
        for i in n:
            if i not in z:
                z.extend(recurse(G, z, i))
    return z
z = []
f = recurse(G,z,1)
print(f)
The function is supposed to return the sub-group [1, 2, 3, 5] when given node 1 as the starting node, but instead it returns [1, 2, 3, 1, 2, 3].
Any ideas how I can perform this task by tweaking the code or maybe using another method?
Thanks!
In case you're not interested in the order in which the nodes are visited, you could just do a DFS and collect the visited nodes into a set:
def recurse(G, z, node):
    z.add(node)
    for i in G.neighbors(node):
        if i not in z:
            recurse(G, z, i)

z = set()
recurse(G, z, 1)
print(z)  # {1, 2, 3, 5}
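If the recursion itself is not the point, networkx also has a built-in helper that returns exactly this set:

import networkx as nx

# nodes in the same connected component as node 1
print(nx.node_connected_component(G, 1))  # {1, 2, 3, 5}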

How to find two random nodes with no edges between them in a graph?

I am quite new to Python. I want to find two random nodes in a network which have no edges between them, but my program sometimes returns an empty list or more than two nodes.
Can anyone help me with this? My program is:
import networkx as nx
import random
n=6
m=10
G=nx.gnm_random_graph( n, m, seed=None, directed=True)
result = []
nodes = random.sample(G.nodes(), 2)
for u in nodes:
    for v in nodes:
        if u != v and G.has_edge(u, v) is False and G.has_edge(v, u) is False:
            result.append((u, v))
        else:
            nodes = random.sample(G.nodes(), 2)
print(result)
If you just want one pair of nodes, there is no reason to make a list. Just find the pair!
while True:
    u, v = random.sample(G.nodes(), 2)
    if not (G.has_edge(u, v) or G.has_edge(v, u)):
        break
Now use u and v directly.
import networkx as nx
import random
n=6
m=10
G=nx.gnm_random_graph( n, m, seed=None, directed=True)
nodes = G.nodes()
def select_2_random_unconnected_nodes(node_list, graph):
    selected_node = random.choice(node_list)

    # obtain all the nodes connected to the selected node
    connected_nodes = [n for _, n in graph.edges(selected_node)]
    print(connected_nodes + [selected_node])

    # a feasible node is one not in connected_nodes and also not the first selected_node
    feasible_nodes = [feasible_n for feasible_n in node_list if feasible_n not in connected_nodes + [selected_node]]

    # select a second node from the feasible_nodes list
    select_second_node = random.choice(feasible_nodes)

    return selected_node, select_second_node
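The function is never called in the answer; a usage example might look like this (note that random.choice needs an indexable sequence, hence list() around the node view; if every other node happens to be connected to the first pick, feasible_nodes is empty and random.choice raises an IndexError, so this is only a sketch):

# Example usage with the graph G defined above.
u, v = select_2_random_unconnected_nodes(list(G.nodes()), G)
print(u, v, G.has_edge(u, v), G.has_edge(v, u))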

Drawing multiplex graphs with networkx?

I'm trying to visualize a few graphs whose nodes represent different objects. I want to create an image that looks like the one here:
Basically, I need a 3D plot and the ability to draw edges between nodes on the same level or nodes on different levels.
The answer below may not be a complete solution, but it is a working demo for rendering 3D graphs using networkx.
networkx as such cannot render 3D graphs. We will have to install mayavi for that to happen.
import networkx as nx
import matplotlib.pyplot as plt
import numpy as np
from mayavi import mlab
import random

def draw_graph3d(graph, graph_colormap='winter', bgcolor=(1, 1, 1),
                 node_size=0.03,
                 edge_color=(0.8, 0.8, 0.8), edge_size=0.002,
                 text_size=0.008, text_color=(0, 0, 0)):

    H = nx.Graph()

    # add edges
    for node, edges in graph.items():
        for edge, val in edges.items():
            if val == 1:
                H.add_edge(node, edge)

    G = nx.convert_node_labels_to_integers(H)

    graph_pos = nx.spring_layout(G, dim=3)

    # numpy array of x,y,z positions in sorted node order
    xyz = np.array([graph_pos[v] for v in sorted(G)])

    # scalar colors
    scalars = np.array(G.nodes()) + 5

    mlab.figure(1, bgcolor=bgcolor)
    mlab.clf()

    #----------------------------------------------------------------------------
    # the x, y, and z co-ordinates are here
    # manipulate them to obtain the desired projection perspective
    pts = mlab.points3d(xyz[:, 0], xyz[:, 1], xyz[:, 2],
                        scalars,
                        scale_factor=node_size,
                        scale_mode='none',
                        colormap=graph_colormap,
                        resolution=20)
    #----------------------------------------------------------------------------

    for i, (x, y, z) in enumerate(xyz):
        label = mlab.text(x, y, str(i), z=z,
                          width=text_size, name=str(i), color=text_color)
        label.property.shadow = True

    pts.mlab_source.dataset.lines = np.array(G.edges())
    tube = mlab.pipeline.tube(pts, tube_radius=edge_size)
    mlab.pipeline.surface(tube, color=edge_color)

    mlab.show()  # interactive window

# create tangled hypercube
def make_graph(nodes):

    def make_link(graph, i1, i2):
        graph[i1][i2] = 1
        graph[i2][i1] = 1

    n = len(nodes)
    if n == 1:
        return {nodes[0]: {}}

    nodes1 = nodes[0:n // 2]
    nodes2 = nodes[n // 2:]
    G1 = make_graph(nodes1)
    G2 = make_graph(nodes2)

    # merge G1 and G2 into a single graph
    G = {**G1, **G2}

    # link G1 and G2
    random.shuffle(nodes1)
    random.shuffle(nodes2)
    for i in range(len(nodes1)):
        make_link(G, nodes1[i], nodes2[i])

    return G

# graph example
nodes = list(range(10))
graph = make_graph(nodes)
draw_graph3d(graph)
This code was modified from one of the examples here.
Please post your code here if you succeed in reaching the objective.
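If installing mayavi is not an option, a rough alternative (my addition, not part of the answer above) is to place each layer of the multiplex graph at a fixed z value and draw nodes, intra-layer edges and inter-layer edges with matplotlib's built-in 3D axes:

import matplotlib.pyplot as plt
import networkx as nx
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (needed on older matplotlib)

# two example layers over the same node set, plus inter-layer couplings
layers = [nx.cycle_graph(6), nx.path_graph(6)]
coupling = [(n, n) for n in range(6)]  # node n on layer 0 <-> node n on layer 1

pos = nx.circular_layout(layers[0])  # shared 2D layout for every layer

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')

for z, layer in enumerate(layers):
    xs = [pos[n][0] for n in layer.nodes()]
    ys = [pos[n][1] for n in layer.nodes()]
    ax.scatter(xs, ys, z, s=60)
    for u, v in layer.edges():  # intra-layer edges
        ax.plot([pos[u][0], pos[v][0]], [pos[u][1], pos[v][1]], [z, z], color='gray')

for u, v in coupling:  # inter-layer edges
    ax.plot([pos[u][0], pos[v][0]], [pos[u][1], pos[v][1]], [0, 1],
            color='lightblue', linestyle='--')

ax.set_axis_off()
plt.show()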

Python: Calculate Voronoi Tessellation from Scipy's Delaunay Triangulation in 3D

I have about 50,000 data points in 3D on which I have run scipy.spatial.Delaunay from the new scipy (I'm using 0.10) which gives me a very useful triangulation.
Based on: http://en.wikipedia.org/wiki/Delaunay_triangulation (section "Relationship with the Voronoi diagram")
...I was wondering if there is an easy way to get to the "dual graph" of this triangulation, which is the Voronoi tessellation.
Any clues? My searching around seems to show no pre-built scipy function, which I find almost strange!
Thanks,
Edward
The adjacency information can be found in the neighbors attribute of the Delaunay object. Unfortunately, the code does not expose the circumcenters to the user at the moment, so you'll have to recompute those yourself.
Also, the Voronoi edges that extend to infinity are not directly obtained in this way. It's still probably possible, but needs some more thinking.
import numpy as np
from scipy.spatial import Delaunay
points = np.random.rand(30, 2)
tri = Delaunay(points)
p = tri.points[tri.vertices]
# Triangle vertices
A = p[:,0,:].T
B = p[:,1,:].T
C = p[:,2,:].T
# See http://en.wikipedia.org/wiki/Circumscribed_circle#Circumscribed_circles_of_triangles
# The following is just a direct transcription of the formula there
a = A - C
b = B - C
def dot2(u, v):
    return u[0]*v[0] + u[1]*v[1]

def cross2(u, v, w):
    """u x (v x w)"""
    return dot2(u, w)*v - dot2(u, v)*w

def ncross2(u, v):
    """|| u x v ||^2"""
    return sq2(u)*sq2(v) - dot2(u, v)**2

def sq2(u):
    return dot2(u, u)
cc = cross2(sq2(a) * b - sq2(b) * a, a, b) / (2*ncross2(a, b)) + C
# Grab the Voronoi edges
vc = cc[:,tri.neighbors]
vc[:,tri.neighbors == -1] = np.nan # edges at infinity, plotting those would need more work...
lines = []
lines.extend(zip(cc.T, vc[:,:,0].T))
lines.extend(zip(cc.T, vc[:,:,1].T))
lines.extend(zip(cc.T, vc[:,:,2].T))
# Plot it
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection
lines = LineCollection(lines, edgecolor='k')
plt.plot(points[:,0], points[:,1], '.')
plt.plot(cc[0], cc[1], '*')
plt.gca().add_collection(lines)
plt.axis('equal')
plt.xlim(-0.1, 1.1)
plt.ylim(-0.1, 1.1)
plt.show()
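As a side note for readers on newer SciPy versions: since SciPy 0.12 the manual reconstruction above is no longer necessary, because scipy.spatial exposes the Voronoi diagram directly:

import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import Voronoi, voronoi_plot_2d

points = np.random.rand(30, 2)
vor = Voronoi(points)

# vor.vertices, vor.ridge_vertices and vor.regions hold the tessellation
voronoi_plot_2d(vor)
plt.show()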
As I spent a considerable amount of time on this, I'd like to share my solution on how to get the Voronoi polygons instead of just the edges.
The code is at https://gist.github.com/letmaik/8803860 and extends on the solution of tauran.
First, I changed the code to give me vertices and (pairs of) indices (=edges) separately, as many calculations can be simplified when working on indices instead of point coordinates.
Then, in the voronoi_cell_lines method I determine which edges belong to which cells. For that I use the proposed solution of Alink from a related question. That is, for each edge find the two nearest input points (=cells) and create a mapping from that.
The last step is to create the actual polygons (see voronoi_polygons method). First, the outer cells which have dangling edges need to be closed. This is as simple as looking through all edges and checking which ones have only one neighboring edge. There can be either zero or two such edges. In case of two, I then connect these by introducing an additional edge.
Finally, the unordered edges in each cell need to be put into the right order to derive a polygon from them.
The usage is:
P = np.random.random((100,2))
fig = plt.figure(figsize=(4.5,4.5))
axes = plt.subplot(1,1,1)
plt.axis([-0.05,1.05,-0.05,1.05])
vertices, lineIndices = voronoi(P)
cells = voronoi_cell_lines(P, vertices, lineIndices)
polys = voronoi_polygons(cells)
for pIdx, polyIndices in polys.items():
    poly = vertices[np.asarray(polyIndices)]
    p = matplotlib.patches.Polygon(poly, facecolor=np.random.rand(3))
    axes.add_patch(p)
X,Y = P[:,0],P[:,1]
plt.scatter(X, Y, marker='.', zorder=2)
plt.axis([-0.05,1.05,-0.05,1.05])
plt.show()
which outputs:
The code is probably not suitable for large numbers of input points and can be improved in some areas. Nevertheless, it may be helpful to others who have similar problems.
I came across the same problem and built a solution out of pv.'s answer and other code snippets I found across the web. The solution returns a complete Voronoi diagram, including the outer lines where no triangle neighbours are present.
#!/usr/bin/env python
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from scipy.spatial import Delaunay

def voronoi(P):
    delauny = Delaunay(P)
    triangles = delauny.points[delauny.vertices]

    lines = []

    # Triangle vertices
    A = triangles[:, 0]
    B = triangles[:, 1]
    C = triangles[:, 2]
    lines.extend(zip(A, B))
    lines.extend(zip(B, C))
    lines.extend(zip(C, A))
    lines = matplotlib.collections.LineCollection(lines, color='r')
    plt.gca().add_collection(lines)

    circum_centers = np.array([triangle_csc(tri) for tri in triangles])

    segments = []
    for i, triangle in enumerate(triangles):
        circum_center = circum_centers[i]
        for j, neighbor in enumerate(delauny.neighbors[i]):
            if neighbor != -1:
                segments.append((circum_center, circum_centers[neighbor]))
            else:
                ps = triangle[(j+1)%3] - triangle[(j-1)%3]
                ps = np.array((ps[1], -ps[0]))

                middle = (triangle[(j+1)%3] + triangle[(j-1)%3]) * 0.5
                di = middle - triangle[j]

                ps /= np.linalg.norm(ps)
                di /= np.linalg.norm(di)

                if np.dot(di, ps) < 0.0:
                    ps *= -1000.0
                else:
                    ps *= 1000.0

                segments.append((circum_center, circum_center + ps))
    return segments

def triangle_csc(pts):
    rows, cols = pts.shape

    A = np.bmat([[2 * np.dot(pts, pts.T), np.ones((rows, 1))],
                 [np.ones((1, rows)), np.zeros((1, 1))]])

    b = np.hstack((np.sum(pts * pts, axis=1), np.ones((1))))
    x = np.linalg.solve(A, b)
    bary_coords = x[:-1]
    return np.sum(pts * np.tile(bary_coords.reshape((pts.shape[0], 1)), (1, pts.shape[1])), axis=0)

if __name__ == '__main__':
    P = np.random.random((300, 2))
    X, Y = P[:, 0], P[:, 1]

    fig = plt.figure(figsize=(4.5, 4.5))
    axes = plt.subplot(1, 1, 1)
    plt.scatter(X, Y, marker='.')
    plt.axis([-0.05, 1.05, -0.05, 1.05])

    segments = voronoi(P)
    lines = matplotlib.collections.LineCollection(segments, color='k')
    axes.add_collection(lines)
    plt.axis([-0.05, 1.05, -0.05, 1.05])
    plt.show()
Black lines = Voronoi diagram, red lines = Delaunay triangles
I do not know of a function to do this, but it does not seem like an overly complicated task.
The Voronoi diagram is obtained by joining the circumcenters of adjacent triangles, as described in the Wikipedia article.
So you could start with a function that finds the center of the circumcircle of a triangle, which is basic mathematics (http://en.wikipedia.org/wiki/Circumscribed_circle).
Then, just join the centers of adjacent triangles.
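A minimal sketch of that idea (mine, following the circumscribed-circle formula referenced above): compute the circumcenter of every triangle and connect the circumcenters of triangles that share an edge.

import numpy as np
from scipy.spatial import Delaunay

def circumcenter(a, b, c):
    # 2D circumcenter via the standard formula from the Wikipedia article
    d = 2 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    ux = ((a[0]**2 + a[1]**2) * (b[1] - c[1]) +
          (b[0]**2 + b[1]**2) * (c[1] - a[1]) +
          (c[0]**2 + c[1]**2) * (a[1] - b[1])) / d
    uy = ((a[0]**2 + a[1]**2) * (c[0] - b[0]) +
          (b[0]**2 + b[1]**2) * (a[0] - c[0]) +
          (c[0]**2 + c[1]**2) * (b[0] - a[0])) / d
    return np.array([ux, uy])

points = np.random.rand(20, 2)
tri = Delaunay(points)
centers = np.array([circumcenter(*points[s]) for s in tri.simplices])

# finite Voronoi edges: join circumcenters of triangles that share an edge
edges = [(centers[i], centers[j])
         for i, nbrs in enumerate(tri.neighbors)
         for j in nbrs if j != -1 and j > i]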
