How to check whether a graph is an undirected graph? - python

I am currently writing a function to check whether a graph is undirected.
My graphs are stored in the following way. This is an undirected graph of 3 nodes: 1, 2, 3.
graph = {1: {2:{...}, 3:{...}}, 2: {1:{...}, 3:{...}}, 3: {1:{...}, 2:{...}}}
The {...} represents alternating layers of dictionaries for the connections of each node. It recurs indefinitely, since the dictionaries are nested inside each other.
More details about the graph:
the keys are the nodes, and each value is a dict whose keys are the nodes connected to that node.
Example: two nodes (1, 2) with an undirected edge: graph = {1: {2: {1: {...}}}, 2: {1: {2: {...}}}}
Example2: two nodes (1, 2) with a directed edge from 1 to 2: graph = {1: {2: {}}, 2: {}}
My current way of figuring out whether a graph is undirected is to check whether the number of edges in the graph equals n*(n-1)/2 (where n is the number of nodes), but this cannot differentiate between 15 directed edges and 15 undirected edges. What other way can I confirm that my graph is undirected?

First off, I think you're abusing terminology by calling a graph with edges in both directions "undirected". In a real undirected graph, there is no notion of direction to an edge, which often means you don't need redundant direction information in the graph's representation in a computer program. What you have is a directed graph, and you want to see if it could be represented by an undirected graph, even though you're not doing so yet.
I'm not sure there's any easier way to do this than by checking every edge in the graph to see if the reversed edge also exists. This is pretty easy with your graph structure: just loop over the vertices and check if there is a returning edge for every outgoing edge:
def undirected_compatible(graph):
    for src, edges in graph.items():  # edges is the dict of outgoing edges from src
        for dst, dst_edges in edges.items():  # dst_edges is the dict of outgoing edges from dst
            if src not in dst_edges:
                return False
    return True
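For instance, here is a minimal sketch of how the structure from the question could be built and checked (the helper names n1 and n2 are mine, purely for illustration):
# Build the self-referential structure for two nodes joined by an undirected edge.
n1, n2 = {}, {}
n1[2] = n2                                  # edge 1 -> 2 (points at node 2's own dict)
n2[1] = n1                                  # edge 2 -> 1
undirected = {1: n1, 2: n2}
print(undirected_compatible(undirected))    # True

directed = {1: {2: {}}, 2: {}}              # only the edge 1 -> 2
print(undirected_compatible(directed))      # False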
I'd note that a more typical way of describing a graph like yours would be to omit the nested dictionaries and just give a list of destinations for the edges. A fully connected 3-node graph would be:
{1: [2, 3], 2: [1, 3], 3: [1, 2]}
You can get the same information from this graph as from your current one; you'd just need an extra indirection to look up the destination node in the top-level graph dict, rather than having it already be the value of the corresponding key in the edge container. A version of my function above for this more conventional structure would be:
def undirected_compatible(graph):
    for src, edges in graph.items():
        for dst in edges:
            if src not in graph[dst]:
                return False
    return True
The not in test may make this slower for large graphs, since searching a list for an item is less asymptotically efficient than checking if a key is in a dictionary. If you needed the higher performance, you could use sets instead of lists, to speed up the membership tests.
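If it helps, here is a hedged sketch of that set-based variant (undirected_compatible_fast is my own name for it, not from the original answer):
def undirected_compatible_fast(graph):
    # Convert each neighbor list to a set once, so every membership test is O(1).
    adjacency = {node: set(neighbors) for node, neighbors in graph.items()}
    return all(src in adjacency[dst]
               for src, neighbors in adjacency.items()
               for dst in neighbors)
Usage is the same as before: undirected_compatible_fast({1: [2, 3], 2: [1, 3], 3: [1, 2]}) returns True.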

Related

How to find trio in list of lists? [duplicate]

I am working with complex networks. I want to find groups of nodes which form a cycle of 3 nodes (or triangles) in a given graph. As my graph contains about a million edges, a simple iterative solution (multiple nested "for" loops) is not very efficient.
I am using Python for my programming; if there are some built-in modules for handling this, please let me know.
If someone knows an algorithm which can be used for finding triangles in graphs, kindly reply back.
Assuming it's an undirected graph, the answer lies in the networkx library for Python.
If you just need to count triangles, use:
import networkx as nx
tri=nx.triangles(g)
But if you need to know the edge list with triangle (triadic) relationships, use
all_cliques= nx.enumerate_all_cliques(g)
This will give you all cliques (k = 1, 2, 3, ..., up to the maximum clique size).
So, to filter just triangles, i.e. k = 3:
triad_cliques=[x for x in all_cliques if len(x)==3 ]
triad_cliques will then contain only the triangles (each as a list of 3 nodes).
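As a minimal, self-contained sketch of both approaches (the small graph g is built here just for illustration):
import networkx as nx

g = nx.Graph([(1, 2), (2, 3), (1, 3), (3, 4)])     # one triangle: 1-2-3
print(nx.triangles(g))                             # {1: 1, 2: 1, 3: 1, 4: 0}
all_cliques = nx.enumerate_all_cliques(g)
triad_cliques = [x for x in all_cliques if len(x) == 3]
print(triad_cliques)                               # [[1, 2, 3]] (node order within the clique may vary)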
A million edges is quite small. Unless you are doing it thousands of times, just use a naive implementation.
I'll assume that you have a dictionary of node_ids, which point to a sequence of their neighbors, and that the graph is directed.
For example:
nodes = {}
nodes[0] = 1, 2
nodes[1] = tuple()  # empty tuple
nodes[2] = (1,)     # one-element tuple, so every value is a sequence of neighbors
My solution:
def generate_triangles(nodes):
    """Generate triangles. Weed out duplicates."""
    visited_ids = set()  # remember the nodes that we have tested already
    for node_a_id in nodes:
        for node_b_id in nodes[node_a_id]:
            if node_b_id == node_a_id:
                raise ValueError  # nodes shouldn't point to themselves
            if node_b_id in visited_ids:
                continue  # we should have already found b->a->??->b
            for node_c_id in nodes[node_b_id]:
                if node_c_id in visited_ids:
                    continue  # we should have already found c->a->b->c
                if node_a_id in nodes[node_c_id]:
                    yield (node_a_id, node_b_id, node_c_id)
        visited_ids.add(node_a_id)  # don't search a - we already have all those cycles
Checking performance:
from random import randint

n = 1000000
node_list = range(n)
nodes = {}
for node_id in node_list:
    node = tuple()
    for i in range(randint(0, 10)):  # add up to 10 neighbors
        try:
            neighbor_id = node_list[node_id + randint(-5, 5)]  # pick a nearby node
        except IndexError:
            continue
        if neighbor_id != node_id and neighbor_id not in node:  # skip self-loops and duplicates
            node = node + (neighbor_id,)
    nodes[node_id] = node

cycles = list(generate_triangles(nodes))
print(len(cycles))
When I tried it, it took longer to build the random graph than to count the cycles.
You might want to test it though ;) I won't guarantee that it's correct.
You could also look into networkx, which is the big python graph library.
A pretty easy and clear way to do this is to use NetworkX:
With NetworkX you can get the loops of an undirected graph with nx.cycle_basis(G) and then select the ones with 3 nodes:
cycls_3 = [c for c in nx.cycle_basis(G) if len(c)==3]
Or you can find all the cliques with find_cliques(G) and then select the ones you want (those with 3 nodes). Cliques are sections of the graph where all the nodes are connected to each other, which is exactly what happens in a cycle/loop of 3 nodes.
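For example, a hedged sketch (the graph G is built here just for illustration; keep in mind that cycle_basis returns only a cycle basis, so in dense graphs it need not list every triangle, and find_cliques yields maximal cliques only):
import networkx as nx

G = nx.Graph([(1, 2), (2, 3), (1, 3), (3, 4)])              # one triangle: 1-2-3
cycls_3 = [c for c in nx.cycle_basis(G) if len(c) == 3]
print(cycls_3)                                              # e.g. [[2, 3, 1]]
tri_cliques = [c for c in nx.find_cliques(G) if len(c) == 3]
print(tri_cliques)                                          # e.g. [[1, 2, 3]]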
Even though it isn't efficient, you may want to implement a solution anyway, so use the loops. Write a test so you can get an idea of how long it takes.
Then, as you try new approaches you can do two things:
1) Make certain that the answer remains the same.
2) See what the improvement is.
Having a faster algorithm that misses something is probably going to be worse than having a slower one.
Once you have the slow test, you can see if you can do this in parallel and see what the performance increase is.
Then, you can see if you can mark (and skip) all nodes that have fewer than two neighbors, since they cannot be part of a triangle.
Ideally, you may want to shrink it down to just 100 or so first, so you can draw it, and see what is happening graphically.
Sometimes your brain will see a pattern that isn't as obvious when looking at algorithms.
I don't want to sound harsh, but have you tried to Google it? The first link is a pretty quick algorithm to do that:
http://www.mail-archive.com/algogeeks#googlegroups.com/msg05642.html
And then there is this article on ACM (which you may have access to):
http://portal.acm.org/citation.cfm?id=244866
(and if you don't have access, I am sure if you kindly ask the lady who wrote it, you will get a copy.)
Also, I can imagine a triangle enumeration method based on clique-decomposition, but I don't know if it was described somewhere.
I am working on the same problem of counting the number of triangles in an undirected graph, and Wisty's solution works really well in my case. I have modified it a bit so that only undirected triangles are counted.
#### function for counting undirected cycles
def generate_triangles(nodes):
    visited_ids = set()  # mark visited nodes
    for node_a_id in nodes:
        temp_visited = set()  # to get undirected triangles
        for node_b_id in nodes[node_a_id]:
            if node_b_id == node_a_id:
                raise ValueError  # to prevent self-loops; if your graph allows self-loops then you don't need this condition
            if node_b_id in visited_ids:
                continue
            for node_c_id in nodes[node_b_id]:
                if node_c_id in visited_ids:
                    continue
                if node_c_id in temp_visited:
                    continue
                if node_a_id in nodes[node_c_id]:
                    yield (node_a_id, node_b_id, node_c_id)
                else:
                    continue
            temp_visited.add(node_b_id)
        visited_ids.add(node_a_id)
Of course, you need to feed it a dictionary, for example:
#### Test cycles ####
nodes = {}
nodes[0] = [1, 2, 3]
nodes[1] = [0, 2]
nodes[2] = [0, 1, 3]
nodes[3] = [1]
cycles = list(generate_triangles(nodes))
print(cycles)
Using the code of Wisty, the triangles found will be
[(0, 1, 2), (0, 2, 1), (0, 3, 1), (1, 2, 3)]
which counts the triangles (0, 1, 2) and (0, 2, 1) as two different triangles. With the code I modified, these are counted as only one triangle.
I used this with a relatively small dictionary of under 100 keys, where each key has on average 50 values.
Surprised to see no mention of the Networkx triangles function. I know it doesn't necessarily return the groups of nodes that form a triangle, but should be pretty relevant to many who find themselves on this page.
nx.triangles(G)                     # dict of how many triangles each node is part of
sum(nx.triangles(G).values())/3     # total number of triangles
An alternative way to return clumps of nodes would be something like...
# adj_m is assumed to be a sparse adjacency matrix of G, e.g. nx.adjacency_matrix(G),
# with node labels matching the matrix row indices.
import numpy as np

for u, v, d in G.edges(data=True):
    u_array = adj_m.getrow(u).nonzero()[1]  # get lists of all adjacent nodes
    v_array = adj_m.getrow(v).nonzero()[1]
    # find the intersection of the two sets - these are the third nodes of the triangle
    np.intersect1d(v_array, u_array)
If you don't care about multiple copies of the same triangle in different order then a list of 3-tuples works:
from itertools import combinations as combos
[(n,nbr,nbr2) for n in G for nbr, nbr2 in combos(G[n],2) if nbr in G[nbr2]]
The logic here is to check each pair of neighbors of every node to see if they are connected. G[n] is a fast way to iterate over or look up neighbors.
If you want to get rid of reorderings, turn each triple into a frozenset and make a set of the frozensets:
set(frozenset([n, nbr, nbr2]) for n in G for nbr, nbr2 in combos(G[n], 2) if nbr in G[nbr2])
If you don't like frozenset and want a list of sets then:
triple_iter = ((n, nbr, nbr2) for n in G for nbr, nbr2 in combos(G[n],2) if nbr in G[nbr2])
triangles = set(frozenset(tri) for tri in triple_iter)
nice_triangles = [set(tri) for tri in triangles]
Do you need to find 'all' of the 'triangles', or just 'some'/'any'?
Or perhaps you just need to test whether a particular node is part of a triangle?
The test is simple: given a node A, are there any two nodes B and C adjacent to A that are also directly connected to each other?
If you need to find all of the triangles - specifically, all groups of 3 nodes in which each node is joined to the other two - then you need to check every possible group in a very long running 'for each' loop.
The only optimisation is ensuring that you don't check the same 'group' twice, e.g. if you have already tested that B & C aren't in a group with A, then don't check whether A & C are in a group with B.
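A minimal sketch of that per-node test, assuming an adjacency mapping adj from each node to a set of its neighbours (both names are mine):
from itertools import combinations

def in_triangle(adj, a):
    """Return True if node a is part of at least one triangle."""
    return any(c in adj[b] for b, c in combinations(adj[a], 2))

# Example: nodes 0, 1, 2 form a triangle; node 3 hangs off node 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(in_triangle(adj, 0), in_triangle(adj, 3))   # True False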
This is a more efficient version of Ajay M's answer (I would have commented on it, but I don't have enough reputation).
Indeed, the enumerate_all_cliques method of networkx will return all cliques in the graph, irrespective of their length; hence looping over it may take a lot of time (especially with very dense graphs).
Moreover, once you have defined it for triangles, it's just a matter of parametrization to generalize the method to every clique length, so here's a function:
import networkx as nx

def get_cliques_by_length(G, length_clique):
    """Return the list of all cliques in an undirected graph G with length
    equal to length_clique."""
    cliques = []
    for c in nx.enumerate_all_cliques(G):
        if len(c) <= length_clique:
            if len(c) == length_clique:
                cliques.append(c)
        else:
            return cliques
    # return empty list if nothing is found
    return cliques
To get triangles just use get_cliques_by_length(G, 3).
Caveat: this method works only for undirected graphs. Algorithms for cliques in directed graphs are not provided in networkx.
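A quick usage sketch (the random graph here is just for demonstration):
G = nx.gnp_random_graph(30, 0.2, seed=1)      # any undirected example graph
triangles = get_cliques_by_length(G, 3)
four_cliques = get_cliques_by_length(G, 4)
print(len(triangles), len(four_cliques))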
I just found that nx.edge_disjoint_paths works for counting the triangles that contain a given edge. It is faster than nx.enumerate_all_cliques and nx.cycle_basis.
It returns the edge-disjoint paths between source and target. Edge-disjoint paths are paths that do not share any edge.
The number of such paths minus 1 is the number of triangles that contain the given edge, i.e. between the source node and the target node.
edge_triangle_dict = {}
for i in g.edges:
    edge_triangle_dict[i] = len(list(nx.edge_disjoint_paths(g, i[0], i[1]))) - 1

Remove reversible edges from directed graph networkx

Is there a way of removing reversible edges from a graph? For instance, let's say we have the following graph:
import networkx as nx
G=nx.DiGraph()
G.add_edge(1,2)
G.add_edge(2,3)
G.add_edge(2,1)
G.add_edge(3,1)
print (G.edges())
[(1, 2), (2, 3), (2, 1), (3, 1)]
I want to remove (2,1) and (3,1), since I want the graph to be directed in just one direction. I know you can remove self-loops by using G.remove_edges_from(G.selfloop_edges()), but that's not the case here. The output I am looking for would be [(1, 2), (2, 3)]. Is there a way to remove these edges once the graph is created, either with networkx or with another graph tool such as Cytoscape?
Method 1:
remove duplicate entries in edgelist -> remove everything from graph -> add back single edges => graph with single edges
Edges are stored as tuples. You can discard the direction information by temporarily converting each edge tuple to a set, and then discard duplicate edges by converting the whole collection to a set. After converting back to a list, you have your list of edges with duplicate entries removed, like so:
stripped_list = list(set([tuple(set(edge)) for edge in G.edges()]))
Then remove all edges from the graph that are currently there, and add back those that are in the list just created:
G.remove_edges_from([e for e in G.edges()])
G.add_edges_from(stripped_list)
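Applied to the example graph from the question, a quick sketch of what Method 1 produces (note that (3, 1) has no reverse in the graph, so it survives as a single edge whose direction, like that of every kept edge, is arbitrary; see the note at the end of this answer):
import networkx as nx

G = nx.DiGraph([(1, 2), (2, 3), (2, 1), (3, 1)])
stripped_list = list(set(tuple(set(edge)) for edge in G.edges()))
G.remove_edges_from(list(G.edges()))
G.add_edges_from(stripped_list)
print(list(G.edges()))    # e.g. [(1, 2), (2, 3), (1, 3)]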
Method 2:
find duplicate edges -> remove only those from graph => graph with single edges
again, losing positional information via conversion to sets:
set_list = [set(a) for a in G.edges()]  # collect all edges, lose positional information
remove_list = []  # initialise

for i in range(len(set_list)):
    edge = set_list.pop(0)  # look at zeroth element in list:
    # if there is still an edge like the current one in the list,
    # add the current edge to the remove list:
    if set_list.count(edge) > 0:
        u, v = edge
        # add the reversed edge
        remove_list.append((v, u))
        # alternatively, add the original edge:
        # remove_list.append((u, v))

G.remove_edges_from(remove_list)  # remove all edges collected above
As far as I know, networkx does not store the order in which edges were added, so unless you want to write further logic, you will either remove all duplicate edges going from lower-numbered nodes to higher-numbered nodes, or the other way round.

Python basic spanning tree algorithm

I cannot figure out how to implement a basic spanning tree in Python, i.e. an unweighted spanning tree.
I've learned how to implement an adjacency list:
adj2 = {}  # adjacency structure built from the edge list adj1
for edge in adj1:
    x, y = edge[0], edge[1]
    if x not in adj2: adj2[x] = set()
    if y not in adj2: adj2[y] = set()
    adj2[x].add(y)
    adj2[y].add(x)
print(adj2)
But I don't know how to implement "finding nearest unconnected vertex".
You don't say which spanning-tree algorithm you need to use. DFS? BFS? Prim's with constant weights? Kruskal's with constant weights? Also, what do you mean by "nearest" unconnected vertex, since the graph is unweighted? All the vertices adjacent to a given vertex v will be at the same distance from v. Finally, are you supposed to start at an arbitrary vertex? at 0? at a user-specified vertex? I'll assume 0 and try to push you in the right direction.
You need to have some way to represent which vertices are already in the spanning tree, so that you don't re-add them to the tree along another edge. That would form a cycle, which can't happen in a tree. Since you definitely need a way to represent the tree, like the [ [1], [0,2,3], [1], [1] ] example you gave, one way to start things out is with [ [], [], [], [] ].
You also need to make sure that you eventually connect everything to a single tree, rather than, for example, finishing with two trees that, taken together, cover all the vertices, but that aren't connected to each other via any edge. There are two ways to do this: (1) Start with a single vertex and grow the tree incrementally until it covers all the nodes, so that you never have more than one tree. (2) Add edges in some other way, keeping track of the connected components, so that you can make sure eventually to connect all the components. Unless you know you need (2), I suggest sticking with (1).
Getting around to your specific question: If you have the input inputgraph = [[1,2],[0,2,3],[0,1],[1]], and you start with currtree = [ [], [], [], [] ], and you start at vertex 0, then you look at inputgraph[0], discover that 0 is adjacent to 1 and 2, and pick either (0,1) or (0,2) to be an edge in the tree. Let's say the algorithm you are using tells you to pick (0,1). Then you update currtree to be [ [1], [0], [], [] ]. Your algorithm then directs you either to pick another edge at 0, which in this inputgraph would have to be (0,2), or directs you to pick an edge at 1, which could be (1,2) or (1,3). Note that your algorithm has to keep track of the fact that (1,0) is not an acceptable choice at 1, since (0,1) is already in the tree.
You need to use one of the algorithms I listed above, or some other spanning-tree algorithm, to be systematic about which vertex to examine next, and which edge to pick next.
I hope that gave you an idea of the issues you have to consider and how you can map an abstract algorithm description to running code. I leave learning the algorithm to you!
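To make that concrete, here is a minimal BFS-based sketch under the assumptions above (start at vertex 0, adjacency-list input like inputgraph, tree stored in the same list-of-lists shape); it is one possible algorithm, not the only acceptable one:
from collections import deque

def bfs_spanning_tree(inputgraph, start=0):
    tree = [[] for _ in inputgraph]        # same shape as the tree example above
    visited = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in inputgraph[u]:
            if v not in visited:           # only add tree edges to unvisited vertices
                visited.add(v)
                tree[u].append(v)
                tree[v].append(u)
                queue.append(v)
    return tree

print(bfs_spanning_tree([[1, 2], [0, 2, 3], [0, 1], [1]]))   # [[1, 2], [0, 3], [0], [1]]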

Fetch connected nodes in a NetworkX graph

Straightforward question: I would like to retrieve all the nodes connected to a given node within a NetworkX graph in order to create a subgraph. In the example shown below, I just want to extract all the nodes inside the circle, given the name of any one of them.
I've tried the following recursive function, but hit Python's recursion limit, even though there are only 91 nodes in this network.
Regardless of whether or not the below code is buggy, what is the best way to do what I'm trying to achieve? I will be running this code on graphs of various sizes, and will not know beforehand what the maximum recursion depth will be.
def fetch_connected_nodes(node, neighbors_list):
    for neighbor in assembly.neighbors(node):
        print(neighbor)
        if len(assembly.neighbors(neighbor)) == 1:
            neighbors_list.append(neighbor)
            return neighbors_list
        else:
            neighbors_list.append(neighbor)
            fetch_connected_nodes(neighbor, neighbors_list)
neighbors = []
starting_node = 'NODE_1_length_6578_cov_450.665_ID_16281'
connected_nodes = fetch_connected_nodes(starting_node, neighbors)
Assuming the graph is undirected, there is a built-in networkx command for this:
node_connected_component(G, n)
The documentation is here. It returns all nodes in the connected component of G containing n.
It's not recursive, but I don't think you actually need or even want that.
Comments on your code: you've got a bug that will often result in infinite recursion. If u and v are neighbors both with degree at least 2, then it will start with u, put v in the list, and when processing v put u in the list, and keep repeating. It needs to change to only process neighbors that are not already in neighbors_list. Checking that in a list is expensive, so use a set instead. There's also a small problem if the starting node has degree 1: your test for degree 1 doesn't do what you're after. If the initial node has degree 1 but its neighbor has higher degree, it won't find the neighbor's neighbors.
Here's a modification of your code:
def fetch_connected_nodes(G, node, seen=None):
    if seen is None:
        seen = set([node])
    for neighbor in G.neighbors(node):
        print(neighbor)
        if neighbor not in seen:
            seen.add(neighbor)
            fetch_connected_nodes(G, neighbor, seen)
    return seen
You call this like fetch_connected_nodes(assembly, starting_node).
You can simply use a Breadth-first search starting from your given node or any node.
In NetworkX you can get the tree graph rooted at your starting node using the function:
bfs_tree(G, source, reverse=False)
Here is a link to the doc: NetworkX bfs_tree.
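A short usage sketch with the names from the question (assembly and starting_node), assuming you simply want every node reachable from the start:
import networkx as nx

tree = nx.bfs_tree(assembly, starting_node)      # BFS tree rooted at starting_node
connected_nodes = list(tree.nodes())             # every node reachable from starting_node
subgraph = assembly.subgraph(connected_nodes)    # the subgraph the question asks for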
Here is a recursive algorithm to get all nodes connected to an input node.
def create_subgraph(G, sub_G, start_node):
    sub_G.add_node(start_node)
    # note: neighbors_iter and Graph.add_path are networkx 1.x APIs;
    # in networkx 2.x use G.neighbors(...) and nx.add_path(sub_G, ...) instead
    for n in G.neighbors_iter(start_node):
        if n not in sub_G.neighbors(start_node):
            sub_G.add_path([start_node, n])
            create_subgraph(G, sub_G, n)
I believe the key thing here to prevent infinite recursive calls is the condition that checks that a node which is a neighbor in the original graph is not already connected in the sub_G being created. Otherwise, you would keep going back and forth along edges between nodes that are already connected.
I tested it as follows:
import networkx as nx
import matplotlib.pyplot as plt

G = nx.erdos_renyi_graph(20, 0.08)
nx.draw(G, with_labels=True)
plt.show()
sub_G = nx.Graph()
create_subgraph(G, sub_G, 17)
nx.draw(sub_G, with_labels=True)
plt.show()
In the attached image you will find the full graph and the sub_graph that contains node 17.

