Text mining: graph of sentences in Python

I'm trying to solve a text mining problem in Python which consists of:
Target: create a graph composed of nodes (sentences) by tokenizing a paragraph into sentences; the edges would be their similarity.
This isn't new at all, but the crux of the question is not well covered on the Internet. So, after getting the sentences from a paragraph, the interesting point would be to compute a matrix of similarity between sentences (all combinations) in order to draw the graph.
Is there any package to compute the similarity between several vectors in an easy way, or even to build a similarity graph from a given list of strings?
A reproducible example:
# tokenize into sentences
>>> from nltk import tokenize
>>> p = "Help my code works. The graph isn't still connected. The code computes the relationship in a graph. "
>>> sentences = tokenize.sent_tokenize(p)
>>> sentences
['Help my code works.', "The graph isn't still connected.", 'The code computes the relationship in a graph.']
>>> len(sentences)
3
# compute similarity with the Dice coefficient
>>> def dice_coefficient(a, b):
...     """Dice coefficient: 2*n_t / (n_a + n_b)."""
...     a_bigrams = set(a)  # note: these are sets of characters, not actual bigrams
...     b_bigrams = set(b)
...     overlap = len(a_bigrams & b_bigrams)
...     return overlap * 2.0 / (len(a_bigrams) + len(b_bigrams))
...
>>> dice_coefficient(sentences[1], sentences[2])
0.918918918918919
So, with this function I can compute each similarity manually and later build the graph from the nodes and edges. But a global solution (for all n sentences at once) would be best.
Any suggestion?

The following list comprehension creates a list of tuples where the first two elements are indexes and the last one is the similarity:
edges = [(i, j, dice_coefficient(x, y))
         for i, x in enumerate(sentences)
         for j, y in enumerate(sentences) if i < j]
You can now remove the edges that fall below a certain threshold, and convert the remaining edges into a graph with networkx:
import networkx as nx

THRESHOLD = 0.5  # example cutoff; tune it for your data
G = nx.Graph()
G.add_edges_from((i, j) for i, j, sim in edges if sim >= THRESHOLD)
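If you also want to keep the similarity as an edge weight and draw the result, a minimal sketch (reusing sentences, edges and THRESHOLD from above) could look like this:

import networkx as nx
import matplotlib.pyplot as plt

G = nx.Graph()
G.add_nodes_from(range(len(sentences)))  # one node per sentence, so isolated sentences still appear
for i, j, sim in edges:
    if sim >= THRESHOLD:
        G.add_edge(i, j, weight=sim)  # keep the similarity as an edge attribute

nx.draw(G, with_labels=True)
plt.show()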

Related

Find "hubs" in a text-data correlation matrix from fuzzywuzzy

If I have a list of strings, how do I select some 'representative' strings such that, between them, they can fuzzy match with all of the strings in the list?
The first step, fuzzy matching all the texts against each other, has already been done.
My idea is to select two or three strings that can act as a representative for the whole set such that if I fuzzy match, I can flag all of them as 1 with a >80 threshold.
Is there a way I can do it?
First, build the binary adjacency matrix of the graph: A[x][j] = 1 if fuzzymatch(x, j) > 80, and 0 otherwise. If you don't care about having a minimal set, you can use a greedy algorithm:
# V: word nodes; A: binary adjacency matrix (A[x][j] = 1 iff fuzzymatch(x, j) > 80)
def greedy_hubset(V, A):
    hubset = set()
    # nodes not yet covered: not a hub themselves and not connected to any hub
    def uncovered():
        return {x for x in V if x not in hubset
                and all(A[x][j] == 0 for j in hubset)}
    while uncovered():
        rest = uncovered()
        # choose the "most central" node, e.g. the one with the most
        # connections to elements not already covered by the hubset
        c = max(rest, key=lambda x: sum(A[x][j] for j in rest))
        hubset.add(c)
    return hubset
The minimal-set problem can be formulated as an integer linear program, which you can hand to a MIP solver:
Variables: x_1 ... x_n  (x_i = 1 if i is in the hub set, 0 if not)
minimize   sum_i x_i
subject to sum_i A[i][j] * x_i >= 1  for all j  (n covering constraints)
(Take A[j][j] = 1 so that a node can cover itself.)
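As a concrete illustration, here is a minimal sketch using the PuLP library (PuLP is my choice here, not something named in the answer; it assumes A[j][j] = 1 as above):

from pulp import LpProblem, LpVariable, LpMinimize, lpSum

def minimal_hubset(V, A):
    prob = LpProblem("minimal_hubset", LpMinimize)
    x = {i: LpVariable(f"x_{i}", cat="Binary") for i in V}
    prob += lpSum(x.values())  # objective: as few hubs as possible
    for j in V:  # every node j must be covered by at least one hub
        prob += lpSum(A[i][j] * x[i] for i in V) >= 1
    prob.solve()
    return {i for i in V if x[i].value() == 1}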
See the networkx documentation for some measures of graph centrality.

Checking if a graph contains a given induced subgraph

I'm trying to detect some minimal patterns with given properties in random digraphs. Namely, I have a list called patterns of adjacency matrices of various sizes. For instance, I have [0] (a sink), but also [0100, 0001, 1000, 0010] (a cycle of size 4), [0100, 0010, 0001, 0000] (a path of length 3), etc.
When I generate a digraph, I compute all sets that may be new patterns. However, in most cases it is something that I don't care about: for instance, if the potential new pattern is a cycle of size 5, it does not teach me anything, because it has a cycle of length 3 as an induced subgraph.
I suppose one way to do it would look like this:
# D is the adjacency matrix of a possible new pattern
new_pattern = True
for pi in patterns:
    k = len(pi)
    induced_subgraphs = all_induced_subgraphs(D, k)
    for s in induced_subgraphs:
        if isomorphic(s, pi):
            new_pattern = False
            break
where all_induced_subgraphs(D, k) gives all possible induced subgraphs of D of size k, and isomorphic(s, pi) determines if s and pi are isomorphic digraphs.
However, checking all induced subgraphs of a digraph seems absolutely horrible to do. Is there a clever thing to do there?
Thanks to @Stef I learned that this problem has a name (induced subgraph isomorphism) and can be solved in networkx with a function described on this page.
Personally I use igraph in my project, so I will use that instead.
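For reference, a minimal sketch of the networkx route (the example graphs here are just illustrative; DiGraphMatcher.subgraph_is_isomorphic checks whether some node-induced subgraph of the first graph is isomorphic to the second):

import networkx as nx
from networkx.algorithms import isomorphism

G = nx.gnp_random_graph(10, 0.3, directed=True)       # candidate digraph
pattern = nx.cycle_graph(4, create_using=nx.DiGraph)  # directed cycle of size 4

matcher = isomorphism.DiGraphMatcher(G, pattern)
print(matcher.subgraph_is_isomorphic())  # True iff G contains an induced copy of pattern

igraph offers a comparable check via Graph.subisomorphic_lad(pattern, induced=True).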

How to find trio in list of lists? [duplicate]

I am working with complex networks. I want to find groups of nodes which form a cycle of 3 nodes (or triangles) in a given graph. As my graph contains about a million edges, using a simple iterative solution (multiple "for" loops) is not very efficient.
I am using Python for my programming; if there are some inbuilt modules for handling these problems, please let me know.
If someone knows an algorithm which can be used for finding triangles in graphs, kindly reply back.
Assuming it's an undirected graph, the answer lies in the networkx library of Python.
If you just need to count triangles, use:
import networkx as nx
tri = nx.triangles(g)
But if you need to know the node lists with a triangle (triadic) relationship, use
all_cliques = nx.enumerate_all_cliques(g)
This will give you all cliques (k = 1, 2, 3, ..., up to the maximum clique size).
So, to filter just triangles, i.e. k = 3:
triad_cliques = [x for x in all_cliques if len(x) == 3]
triad_cliques will give a node list with only triangles.
A million edges is quite small. Unless you are doing it thousands of times, just use a naive implementation.
I'll assume that you have a dictionary of node_ids, each pointing to a sequence of their neighbors, and that the graph is directed.
For example:
nodes = {}
nodes[0] = 1, 2
nodes[1] = tuple()  # empty tuple
nodes[2] = (1,)     # one-element tuple, so membership tests work
My solution:
def generate_triangles(nodes):
    """Generate triangles. Weed out duplicates."""
    visited_ids = set()  # remember the nodes that we have tested already
    for node_a_id in nodes:
        for node_b_id in nodes[node_a_id]:
            if node_b_id == node_a_id:
                raise ValueError  # nodes shouldn't point to themselves
            if node_b_id in visited_ids:
                continue  # we should have already found b->a->??->b
            for node_c_id in nodes[node_b_id]:
                if node_c_id in visited_ids:
                    continue  # we should have already found c->a->b->c
                if node_a_id in nodes[node_c_id]:
                    yield (node_a_id, node_b_id, node_c_id)
        visited_ids.add(node_a_id)  # don't search a - we already have all those cycles
Checking performance:
from random import randint

n = 1000000
node_list = range(n)
nodes = {}
for node_id in node_list:
    node = tuple()
    for i in range(randint(0, 10)):  # add up to 10 neighbors
        try:
            neighbor_id = node_list[node_id + randint(-5, 5)]  # pick a nearby node
        except IndexError:
            continue
        if neighbor_id not in node and neighbor_id != node_id:  # no duplicates or self-loops
            node = node + (neighbor_id,)
    nodes[node_id] = node
cycles = list(generate_triangles(nodes))
print(len(cycles))
When I tried it, it took longer to build the random graph than to count the cycles.
You might want to test it though ;) I won't guarantee that it's correct.
You could also look into networkx, which is the big python graph library.
A pretty easy and clear way to do this is to use networkx:
with networkx you can get the loops of an undirected graph with nx.cycle_basis(G) and then select the ones with 3 nodes:
cycls_3 = [c for c in nx.cycle_basis(G) if len(c) == 3]
or you can find all the cliques with find_cliques(G) and then select the ones you want (with 3 nodes). Cliques are sections of the graph where all the nodes are connected to each other, which happens in cycles/loops with 3 nodes. (Two caveats: a cycle basis need not contain every triangle in the graph, and find_cliques returns only maximal cliques, so triangles inside larger cliques will be missed; enumerate_all_cliques avoids the latter problem.)
Even though it isn't efficient, you may want to implement a solution anyway, so use the loops. Write a test so you can get an idea of how long it takes.
Then, as you try new approaches, you can do two things:
1) Make certain that the answer remains the same.
2) See what the improvement is.
Having a faster algorithm that misses something is probably going to be worse than having a slower one.
Once you have the slow test, you can see if you can do this in parallel and see what the performance increase is.
Then, you can see if you can mark all nodes that have fewer than two neighbours, since they cannot be part of a triangle.
Ideally, you may want to shrink the graph down to just 100 nodes or so first, so you can draw it and see what is happening graphically.
Sometimes your brain will see a pattern that isn't as obvious when looking at algorithms.
I don't want to sound harsh, but have you tried to Google it? The first link is a pretty quick algorithm to do that:
http://www.mail-archive.com/algogeeks@googlegroups.com/msg05642.html
And then there is this article on ACM (which you may have access to):
http://portal.acm.org/citation.cfm?id=244866
(and if you don't have access, I am sure if you kindly ask the lady who wrote it, you will get a copy.)
Also, I can imagine a triangle enumeration method based on clique-decomposition, but I don't know if it was described somewhere.
I am working on the same problem of counting the number of triangles in an undirected graph, and Wisty's solution works really well in my case. I have modified it a bit so that only undirected triangles are counted.
#### function for counting undirected cycles
def generate_triangles(nodes):
    visited_ids = set()  # mark visited nodes
    for node_a_id in nodes:
        temp_visited = set()  # to get undirected triangles
        for node_b_id in nodes[node_a_id]:
            if node_b_id == node_a_id:
                raise ValueError  # to prevent self-loops; if your graph allows self-loops, drop this check
            if node_b_id in visited_ids:
                continue
            for node_c_id in nodes[node_b_id]:
                if node_c_id in visited_ids:
                    continue
                if node_c_id in temp_visited:
                    continue
                if node_a_id in nodes[node_c_id]:
                    yield (node_a_id, node_b_id, node_c_id)
            temp_visited.add(node_b_id)
        visited_ids.add(node_a_id)
Of course, you need to feed it a dictionary of neighbor lists, for example:
#### Test cycles ####
nodes = {}
nodes[0] = [1, 2, 3]
nodes[1] = [0, 2]
nodes[2] = [0, 1, 3]
nodes[3] = [1]
cycles = list(generate_triangles(nodes))
print(cycles)
Using Wisty's code, the triangles found will be
[(0, 1, 2), (0, 2, 1), (0, 3, 1), (1, 2, 3)]
which counts the triangles (0, 1, 2) and (0, 2, 1) as two different triangles. With the code I modified, these are counted as only one triangle.
I used this with a relatively small dictionary of under 100 keys and each key has on average 50 values.
Surprised to see no mention of the networkx triangles function. I know it doesn't necessarily return the groups of nodes that form a triangle, but it should be pretty relevant to many who find themselves on this page.
nx.triangles(G)  # dict of how many triangles each node is part of
sum(nx.triangles(G).values()) / 3  # total number of triangles
An alternative way to return clumps of nodes would be something like...
import numpy as np
adj_m = nx.adjacency_matrix(G)  # adj_m: sparse adjacency matrix (assumed built via nx.adjacency_matrix)
for u, v in G.edges():
    u_array = adj_m.getrow(u).nonzero()[1]  # get lists of all adjacent nodes
    v_array = adj_m.getrow(v).nonzero()[1]
    # the intersection of the two sets are the third nodes of the triangles on edge (u, v)
    print(np.intersect1d(v_array, u_array))
If you don't care about multiple copies of the same triangle in different order, then a list of 3-tuples works:
from itertools import combinations as combos
[(n, nbr, nbr2) for n in G for nbr, nbr2 in combos(G[n], 2) if nbr in G[nbr2]]
The logic here is to check each pair of neighbors of every node to see if they are connected. G[n] is a fast way to iterate over or look up neighbors.
If you want to get rid of reorderings, turn each triple into a frozenset and make a set of the frozensets:
set(frozenset([n, nbr, nbr2]) for n in G for nbr, nbr2 in combos(G[n], 2) if nbr in G[nbr2])
If you don't like frozenset and want a list of sets then:
triple_iter = ((n, nbr, nbr2) for n in G for nbr, nbr2 in combos(G[n], 2) if nbr in G[nbr2])
triangles = set(frozenset(tri) for tri in triple_iter)
nice_triangles = [set(tri) for tri in triangles]
Do you need to find 'all' of the 'triangles', or just 'some'/'any'?
Or perhaps you just need to test whether a particular node is part of a triangle?
The test is simple: given a node A, are there any two neighbours B & C of A that are also directly connected to each other?
If you need to find all of the triangles - specifically, all groups of 3 nodes in which each node is joined to the other two - then you need to check every possible group in a very long running 'for each' loop.
The only optimisation is ensuring that you don't check the same 'group' twice, e.g. if you have already tested that B & C aren't in a group with A, then don't check whether A & C are in a group with B.
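That per-node test, as a minimal sketch (is_in_triangle is a hypothetical helper; G is assumed to be a networkx graph or any mapping from node to neighbours):

from itertools import combinations

def is_in_triangle(G, a):
    # a is in a triangle iff some pair of its neighbours is directly connected
    return any(c in G[b] for b, c in combinations(G[a], 2))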
This is a more efficient version of Ajay M's answer (I would have commented, but I don't have enough reputation).
Indeed the enumerate_all_cliques method of networkx will return all cliques in the graph, irrespective of their length; hence looping over it may take a lot of time (especially with very dense graphs).
Moreover, once defined for triangles, it's just a matter of parametrization to generalize the method for every clique length, so here's a function:
import networkx as nx

def get_cliques_by_length(G, length_clique):
    """Return the list of all cliques in an undirected graph G with length
    equal to length_clique."""
    cliques = []
    for c in nx.enumerate_all_cliques(G):
        if len(c) <= length_clique:
            if len(c) == length_clique:
                cliques.append(c)
        else:
            # cliques are yielded in order of increasing size, so we can stop early
            return cliques
    # return empty list if nothing is found
    return cliques
To get triangles just use get_cliques_by_length(G, 3).
Caveat: this method works only for undirected graphs. Algorithms for cliques in directed graphs are not provided in networkx.
I just found that nx.edge_disjoint_paths works to count the triangles containing a given edge. It is faster than nx.enumerate_all_cliques and nx.cycle_basis.
It returns the edge-disjoint paths between source and target. Edge-disjoint paths are paths that do not share any edge.
The result minus 1 is the number of triangles that contain the given edge (i.e. between the source node and the target node).
edge_triangle_dict = {}
for e in g.edges:
    edge_triangle_dict[e] = len(list(nx.edge_disjoint_paths(g, e[0], e[1]))) - 1

Use Word2vec to determine which two words in a group of words are most similar

I am trying to use the Python wrapper around Word2vec. I have word embeddings for a group of words, which can be seen below, and from them I am trying to determine which two words are most similar to each other.
How can I do this?
['architect', 'nurse', 'surgeon', 'grandmother', 'dad']
@ryan-feldspar's answer is generally the correct approach and will work, but you could do this a bit more compactly using standard Python libraries/idioms, especially itertools, a list comprehension, and sorting functions.
For example, first use combinations() from itertools to generate all pairs of your candidate words:
from itertools import combinations
candidate_words = ['architect', 'nurse', 'surgeon', 'grandmother', 'dad']
all_pairs = combinations(candidate_words, 2)
Then, decorate the pairs with their pairwise similarity:
scored_pairs = [(w2v_model.wv.similarity(p[0], p[1]), p)
                for p in all_pairs]
Finally, sort to put the most-similar pair first, and report that score & pair:
sorted_pairs = sorted(scored_pairs, reverse=True)
print(sorted_pairs[0]) # first item is most-similar pair
If you wanted to be compact but a bit less readable, it could be a (long) "1-liner":
print(sorted([(w2v_model.wv.similarity(p[0], p[1]), p)
              for p in combinations(candidate_words, 2)
              ], reverse=True)[0])
Update:
Integrating @ryan-feldspar's suggestion about max(), and going for minimality, this should also work to report the best pair (but not its score):
print(max(combinations(candidate_words, 2),
          key=lambda p: w2v_model.wv.similarity(p[0], p[1])))
Given you're using gensim's word2vec, according to your comment:
Load up or train the model for your embeddings and then, on your model, you can call:
min_distance = float('inf')
min_pair = None
word2vec_model_wv = model.wv  # unsure if this can be done in the loop, but hoisted out to be safe efficiency-wise
for candidate_word1 in words:
    for candidate_word2 in words:
        if candidate_word1 == candidate_word2:
            continue  # ignore when the two words are the same
        distance = word2vec_model_wv.distance(candidate_word1, candidate_word2)
        if distance < min_distance:
            min_pair = (candidate_word1, candidate_word2)
            min_distance = distance
https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.WordEmbeddingsKeyedVectors.distance
Could also be similarity (I'm not entirely sure if there's a difference). https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.WordEmbeddingsKeyedVectors.similarity
If similarity gets bigger with closer words, as I'd expect, then you'll want to maximize, not minimize, and just replace the distance calls with similarity calls. Basically this is just a simple min/max over the pairs.
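For what it's worth, gensim's KeyedVectors define distance(w1, w2) as 1 - similarity(w1, w2) (cosine similarity), so minimizing distance and maximizing similarity select the same pair. A minimal sketch of the similarity-based variant (reusing words and word2vec_model_wv from above):

# equivalent pair search using similarity instead of distance
best_pair = max(((w1, w2) for w1 in words for w2 in words if w1 != w2),
                key=lambda p: word2vec_model_wv.similarity(*p))
print(best_pair)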

