Detecting communities with Python and networkx

I'm studying community detection in networks.
I'm using networkx and Python, and I need to implement this algorithm: http://arxiv.org/pdf/0803.0476.pdf
This is how I tried to solve it:
First I make a list of lists containing as many lists (communities) as there are nodes in the graph, so that I can find a community by index. Then for every node I find its neighbours and calculate the modularity gain like this:
q1 = (sum_in+ki_in)/float(2*m) - pow(sum_tot+ki,2)/float(pow(2*m,2))
q2 = sum_in/float(2*m) - pow(sum_tot, 2)/float(pow(2*m,2)) -pow(ki,2)/float(pow(2*m,2))
q = q1 - q2
where
for x in temp:  # list of neighbors of node i
    sum_in += G.degree(x, weight='weight')
    sum_tot += G.in_degree(x, weight='weight')
    ki_in += weights[i, x]
ki = G.in_degree(i, weight='weight')
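For reference, the gain formula from the paper that the code above transcribes (in LaTeX):
\Delta Q = \left[ \frac{\Sigma_{in} + k_{i,in}}{2m} - \left( \frac{\Sigma_{tot} + k_i}{2m} \right)^{2} \right] - \left[ \frac{\Sigma_{in}}{2m} - \left( \frac{\Sigma_{tot}}{2m} \right)^{2} - \left( \frac{k_i}{2m} \right)^{2} \right]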
Then I find the maximum q and move node i to the new community.
But this doesn't work; it seems like I made a mistake in the formula, because for a large number of nodes the algorithm doesn't find communities.
Does anyone know how to solve this?

I don't have access to the link you provided. (I intended to comment, but I can't add comments at this point.)

You don't need to solve this; the algorithm is already implemented in Python in the community package. You can have a look at how they did it in the source code.
If you do have to implement it yourself for an assignment, try to avoid the bad habit of going straight to Stack Overflow; you learn more by working it out yourself ;)
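A minimal usage sketch, assuming the python-louvain package (which provides the community module) is installed:

# assumes python-louvain is installed (pip install python-louvain);
# it implements the Louvain method from the paper linked above
import networkx as nx
import community  # python-louvain

G = nx.karate_club_graph()
partition = community.best_partition(G)  # dict mapping each node to a community id
print(partition)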

Related

How can I write a heuristic function for a TSP problem?

I am running a graph search (with different types of search like UCS, BFS, DFS, greedy, and A*) on a TSP problem. I am, however, running into an issue with greedy and A* search because I am stuck on identifying the heuristic function. I have my connected cities stored as a list of lists called connections. The contents look like this: ['newyork', 'albany', 1], which signifies that newyork and albany are connected with a cost of 1 to travel between them.
On a previous implementation of this I was provided with a locations dictionary of the format 'newyork': (91, 492), which gave the coordinates of the cities.
On that previous implementation, this was the method I used to calculate the heuristic:
import math

def euclidian(current_node, goal_node):
    current_location = locations[current_node.state]
    goal_location = locations[goal_node]
    distance = math.dist(current_location, goal_location)
    return distance
Which worked wonderfully!
But now, without the locations, I am stuck and can't figure out how to do it. Can someone please guide me down the right path?
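One coordinate-free possibility is to use the cost of a minimum spanning tree over the cities still to be visited: an MST never costs more than any remaining tour through those cities, so A* stays admissible. A minimal sketch, assuming connections is the list of ['city_a', 'city_b', cost] triples described above:

def mst_heuristic(current_city, unvisited, connections):
    # lower bound on the remaining tour: cost of a minimum spanning tree
    # over the current city plus all unvisited cities (Prim's algorithm)
    cities = set(unvisited) | {current_city}
    cost = {}
    for a, b, c in connections:
        if a in cities and b in cities:
            cost[(a, b)] = cost[(b, a)] = c
    in_tree = {current_city}
    total = 0
    while in_tree != cities:
        # cheapest edge leaving the tree
        candidates = [(c, b) for (a, b), c in cost.items()
                      if a in in_tree and b not in in_tree]
        if not candidates:
            return float('inf')  # remaining cities are unreachable
        c, b = min(candidates)
        total += c
        in_tree.add(b)
    return total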

Programming the nearest neighbor algorithm in Python 3.6

I am very new to programming and might have bitten off more than I can chew. I am trying to create a program that finds the shortest route to visit all of the National Parks by importing a csv file containing the park names and the distances between each park.
Ideally, I would like it to prompt the user for the park they would like to start with and then run through the other parks to find the shortest distance (e.g. if you wanted to start with Yellowstone, it would find the closest park to Yellowstone, then the closest park to that park, etc., then add up all those distances, returning the total mileage and the order the parks were visited in).
I think I need to import the csv file as a dictionary so I can use the park names as keys, but I'm not sure how to work the keys into the algorithm. So far I have the following, put together from my limited knowledge:
import csv
import numpy as np

distances = csv.DictReader(open("ds.csv"))
for row in distances:
    print(row)

startingPark = input('Which park would you like to test?')  # not yet used by NN below

def NN(distanceArray, start):
    path = [start]
    cost = 0
    N = distanceArray.shape[0]
    mask = np.ones(N, dtype=bool)
    mask[start] = False
    for i in range(N - 1):
        last = path[-1]
        next_ind = np.argmin(distanceArray[last][mask])  # find minimum of remaining locations
        next_loc = np.arange(N)[mask][next_ind]  # convert back to the original index
        path.append(next_loc)
        mask[next_loc] = False
        cost += distanceArray[last, next_loc]
    return path, cost

print(NN(distanceArray, 0))  # distanceArray is not defined yet - see below
I know that I have to change all of the array stuff in the actual algorithm part of the code (that's just some code I was able to find through research on here, which I am using as a starting point), but I am unsure of: A) how to get it to actually use the input I give, and B) how to make the algorithm work with the dictionary instead of with arrays that are hard-coded into the script. I've tried using the documentation and such, but it goes a bit over my head. Obviously I don't want anyone to just do it for me, but I'd appreciate any pointers or resources people may have. I'm trying very hard to learn, but with no real guidance from anyone who knows what they're doing, I'm having a hard time.
Edit: Here is a sample of the data I have to work with. I pulled all of the distances from Google Maps and put them into a csv file. I think the x's I have in place might also be a problem and might need to be replaced with 0's or something similar, but I haven't gotten to handling that issue yet. (Sorry for not just uploading the picture, I don't have enough rep to post one yet.)
https://imgur.com/a/I4c1T
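For what it's worth, a minimal sketch of the dictionary-based version (assuming the csv has the park names as its header row and one row per park; the column name 'Park' for the first column is hypothetical):

import csv

def load_distances(filename):
    # build {park: {other_park: miles}} from the csv, skipping 'x' entries
    dist = {}
    with open(filename, newline='') as f:
        for row in csv.DictReader(f):
            name = row.pop('Park')  # 'Park' is a hypothetical name for the first column
            dist[name] = {k: float(v) for k, v in row.items() if v != 'x'}
    return dist

def nearest_neighbour(dist, start):
    # greedy tour: always hop to the closest unvisited park
    path, cost = [start], 0.0
    unvisited = set(dist) - {start}
    while unvisited:
        last = path[-1]
        nxt = min(unvisited, key=lambda p: dist[last][p])
        cost += dist[last][nxt]
        path.append(nxt)
        unvisited.remove(nxt)
    return path, cost

dist = load_distances("ds.csv")
start = input('Which park would you like to start with? ')
print(nearest_neighbour(dist, start))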

Algorithm to find "good" neighbours - graph coloring?

I have a group of people, and for each of them a list of friends and a list of foes. I want to line them up (in a line, not a circle as at a table) so that preferably no enemies, but only friends, are next to each other.
Example with the Input: https://gist.github.com/solars/53a132e34688cc5f396c
I think I need to use graph coloring to solve this, but I'm not sure how. I think I have to leave out the friends (or foes) list to make it easier and map the rest to a graph.
Does anyone know how to solve such problems, and can you tell me whether I'm on the right path?
Code samples or online examples would also be nice; I don't mind the programming language. I usually use Ruby, Java, Python, or JavaScript.
Thanks a lot for your help!
It is already mentioned in the comments that this problem is equivalent to the travelling salesman problem. I would like to elaborate on that:
Every person is equivalent to a vertex, and there are edges between vertices representing persons who can sit next to each other. Now, finding a possible seating arrangement is equivalent to finding a Hamiltonian path in the graph.
So this problem is NP-complete. The most naive solution would be to try all possible permutations, resulting in O(n!) running time. There are a lot of well-known approaches which perform better than O(n!) and are freely available on the web. I would like to mention Held-Karp, which runs in O(n^2*2^n) and is pretty straightforward to code, here in Python:
#graph[i] contains all possible neighbors of the i-th person
def held_karp(graph):
    n = len(graph)  #number of persons
    #remember the set of already seated persons (as a bitmask) and the last person in the line
    #thus a configuration consists of the set of seated persons and the last person in the line
    #start with every possible person:
    possible = set([(2**i, i) for i in xrange(n)])
    #remember the predecessor configuration for every possible configuration:
    preds = dict([((2**i, i), (0, -1)) for i in xrange(n)])
    #there are at most n persons in the line - every iteration adds a person
    for _ in xrange(n-1):
        next_possible = set()
        #iterate through all possible configurations
        for seated, last in possible:
            for neighbor in graph[last]:
                bit_mask = 2**neighbor
                if (bit_mask & seated) == 0:  #this possible neighbor is not yet seated!
                    next_config = (seated | bit_mask, neighbor)  #add neighbor to the bitmask of seated
                    next_possible.add(next_config)
                    preds[next_config] = (seated, last)
        possible = next_possible
    #now reconstruct the line
    if not possible:
        return []  #it is not possible for all to be seated
    line = []
    config = possible.pop()  #any configuration in possible has n persons seated and is good enough!
    while config[1] != -1:
        line.insert(0, config[1])
        config = preds[config]  #go a step back
    return line
Disclaimer: this code is not properly tested, but I hope you can get the gist of it.
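For illustration, a small usage sketch, assuming the friend/foe lists have already been reduced to an adjacency list of acceptable neighbours:

#acceptable-neighbour lists for 4 persons (symmetric)
graph = [
    [1, 2],     #person 0 may sit next to 1 or 2
    [0, 2, 3],  #person 1 may sit next to 0, 2 or 3
    [0, 1],
    [1],
]
print(held_karp(graph))  #one valid line, e.g. [3, 1, 0, 2]; which one you get depends on the arbitrary pop()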

Minimum removed nodes required to cut path from A to B algorithm in Python

I am trying to solve a problem related to graph theory but can't seem to remember/find/understand the proper/best approach, so I figured I'd ask the experts...
I have a list of paths between two nodes (1 and 10 in the example code). I'm trying to find the minimum number of nodes to remove to cut all paths. I'm also only able to remove certain nodes.
I currently have it implemented (below) as a brute-force search. This works fine on my test set but is going to be an issue when scaling up to graphs that have paths in the 100K range and available nodes in the 100s (factorial issue). Right now, I'm not caring about the order I remove nodes in, but I will at some point want to take that into account (switch sets to lists in the code below).
I believe there should be a way to solve this using a max-flow/min-cut algorithm, but everything I'm reading is going way over my head in some way. It's been several (SEVERAL) years since I've done this type of stuff, and I can't seem to remember anything.
So my questions are:
1) Is there a better way to solve this problem other than testing all combinations and taking the smallest set?
2) If so, can you either explain it or, preferably, give pseudocode to help explain? I'm guessing there is probably a library that already does this in some way (I have been looking at and using networkX lately, but am open to others).
3) If not (or even if so), any suggestions for how to multithread/multiprocess the solution? I want to get every bit of performance I can from the computer. (I have found a few good threads on this question; I just haven't had a chance to implement them, so I figured I'd ask at the same time. I first want to get everything working properly before optimizing.)
4) Any general suggestions on making the code more "Pythonic" (which will probably help with performance too)? I know there are improvements I can make, and I am still new to Python.
Thanks for the help.
#!/usr/bin/env python
def bruteForcePaths(paths, availableNodes, setsTested, testCombination, results, loopId):
    #for each available node, we are going to
    #  check if we have already tested a set with this node
    #    if true - move to the next node
    #    if false - remove the affected paths,
    #      if there are paths left,
    #        record combo, continue removing with current combo,
    #      if there are no paths left,
    #        record success, record combo, continue to next node
    #local copies
    currentPaths = list(paths)
    currentAvailableNodes = list(availableNodes)
    currentSetsTested = set(setsTested)
    currentTestCombination = set(testCombination)
    currentLoopId = loopId + 1
    print "loop ID: %d" % (currentLoopId)
    print "currentAvailableNodes:"
    for set1 in currentAvailableNodes:
        print "    %s" % (set1)
    for node in currentAvailableNodes:
        #add to the current test set
        print "%d-current node: %s current combo: %s" % (currentLoopId, node, currentTestCombination)
        currentTestCombination.add(node)
        # print "Testing: %s" % currentTestCombination
        # print "Sets tested:"
        # for set1 in currentSetsTested:
        #     print "    %s" % set1
        if currentTestCombination in currentSetsTested:
            #we already tested this combination of nodes, so go to the next node
            print "Already tested: %s" % currentTestCombination
            currentTestCombination.remove(node)
            continue
        #get all the paths that don't have node in them
        currentRemainingPaths = [path for path in currentPaths if not (node in path)]
        #if there are no paths left
        if len(currentRemainingPaths) == 0:
            #save this combination
            print "successful combination: %s" % currentTestCombination
            results.append(frozenset(currentTestCombination))
            #remember that we tested this combo
            currentSetsTested.add(frozenset(currentTestCombination))
            #now remove the node that was added, and go to the next one
            currentTestCombination.remove(node)
        else:
            #this combo didn't work; save it so we don't test it again
            currentSetsTested.add(frozenset(currentTestCombination))
            newAvailableNodes = list(currentAvailableNodes)
            newAvailableNodes.remove(node)
            bruteForcePaths(currentRemainingPaths,
                            newAvailableNodes,
                            currentSetsTested,
                            currentTestCombination,
                            results,
                            currentLoopId)
            currentTestCombination.remove(node)
    print "-------------------"
    #need to pass "up" the tested sets from this loop
    setsTested.update(currentSetsTested)
    return None

if __name__ == '__main__':
    testPaths = [
        [1, 2, 14, 15, 16, 18, 9, 10],
        [1, 2, 24, 25, 26, 28, 9, 10],
        [1, 2, 34, 35, 36, 38, 9, 10],
        [1, 3, 44, 45, 46, 48, 9, 10],
        [1, 3, 54, 55, 56, 58, 9, 10],
        [1, 3, 64, 65, 66, 68, 9, 10],
        [1, 2, 14, 15, 16, 7, 10],
        [1, 2, 24, 7, 10],
        [1, 3, 34, 35, 7, 10],
        [1, 3, 44, 35, 6, 10],
    ]
    setsTested = set()
    availableNodes = [2, 3, 6, 7, 9]
    results = list()
    currentTestCombination = set()
    bruteForcePaths(testPaths, availableNodes, setsTested, currentTestCombination, results, 0)
    print "results:"
    for result in sorted(results, key=len):
        print result
UPDATE:
I reworked the code using itertools to generate the combinations. It makes the code cleaner and faster (and should be easier to multiprocess). Now to figure out the dominant nodes as suggested, and to multiprocess the function.
from itertools import combinations

def bruteForcePaths3(paths, availableNodes, results):
    #start by taking each combination 1 at a time, then 2, etc.
    for i in range(1, len(availableNodes) + 1):
        print "combo number: %d" % i
        currentCombos = combinations(availableNodes, i)
        for combo in currentCombos:
            #get a fresh copy of paths for this combination
            currentPaths = list(paths)
            currentRemainingPaths = []
            # print combo
            for node in combo:
                #determine a better way to remove nodes; for now - if it's in, we remove
                currentRemainingPaths = [path for path in currentPaths if not (node in path)]
                currentPaths = currentRemainingPaths
            #if there are no paths left
            if len(currentRemainingPaths) == 0:
                #save this combination
                print combo
                results.append(frozenset(combo))
    return None
Here is an answer which ignores the list of paths. It just takes a network, a source node, and a target node, and finds the minimum set of nodes within the network (containing neither the source nor the target) such that removing these nodes disconnects the source from the target.
If I wanted to find the minimum set of edges, I could find out how just by searching for max-flow min-cut. Note that the Wikipedia article at http://en.wikipedia.org/wiki/Max-flow_min-cut_theorem#Generalized_max-flow_min-cut_theorem states that there is a generalized max-flow min-cut theorem which considers vertex capacity as well as edge capacity, which is at least encouraging. Note also that edge capacities are given as C_uv, where C_uv is the maximum capacity from u to v. In the diagram they seem to be drawn as u/v. So the edge capacity in the forward direction can be different from the edge capacity in the backward direction.
To disguise a minimum vertex cut problem as a minimum edge cut problem I propose to make use of this asymmetry. First of all give all the existing edges a huge capacity - for example 100 times the number of nodes in the graph. Now replace every vertex X with two vertices Xi and Xo, which I will call the incoming and outgoing vertices. For every edge between X and Y create an edge between Xo and Yi with the existing capacity going forwards but 0 capacity going backwards - these are one-way edges. Now create an edge between Xi and Xo for each X with capacity 1 going forwards and capacity 0 going backwards.
Now run max-flow min-cut on the resulting graph. Because all the original links have huge capacity, the min cut must all be made up of the capacity 1 links (actually the min cut is defined as a division of the set of nodes into two: what you really want is the set of pairs of nodes Xi, Xo with Xi in one half and Xo in the other half, but you can easily get one from the other). If you break these links you disconnect the graph into two parts, as with standard max-flow min-cut, so deleting these nodes will disconnect the source from the target. Because you have the minimum cut, this is the smallest such set of nodes.
If you can find code for max-flow min-cut, such as those pointed to by http://www.cs.sunysb.edu/~algorith/files/network-flow.shtml, I would expect that it will give you the min cut. If not, for instance if you do it by solving a linear programming problem because you happen to have a linear programming solver handy, notice for example from http://www.cse.yorku.ca/~aaw/Wang/MaxFlowMinCutAlg.html that one half of the min cut is the set of nodes reachable from the source when the graph has been modified to subtract out the edge capacities actually used by the solution; so given just the edge capacities used at max flow you can find it pretty easily.
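For what it's worth, a minimal sketch of this construction using networkx's built-in max-flow min-cut (nx.minimum_cut). My assumptions: the network is recoverable as the consecutive pairs appearing in the known paths, and only nodes in removable may be deleted:

import networkx as nx

def min_node_cut(paths, source, target, removable):
    BIG = 10 ** 9  #stands in for infinite capacity
    D = nx.DiGraph()
    nodes = set(n for path in paths for n in path)
    for x in nodes:
        #internal edge: cutting it costs 1 if x may be removed, 'infinity' otherwise
        D.add_edge((x, 'in'), (x, 'out'), capacity=(1 if x in removable else BIG))
    for path in paths:
        for u, v in zip(path, path[1:]):
            #original links get huge capacity in both directions
            D.add_edge((u, 'out'), (v, 'in'), capacity=BIG)
            D.add_edge((v, 'out'), (u, 'in'), capacity=BIG)
    cut_value, (S, T) = nx.minimum_cut(D, (source, 'out'), (target, 'in'))
    #a node is in the cut when its two halves land on opposite sides
    return [x for x in nodes if (x, 'in') in S and (x, 'out') in T]

print(min_node_cut(testPaths, 1, 10, set([2, 3, 6, 7, 9])))  #e.g. [2, 3]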
If the paths were not provided as part of the problem I would agree that there should be some way to do this via http://en.wikipedia.org/wiki/Max-flow_min-cut_theorem, given a sufficiently ingenious network construction. However, because you haven't given any indication as to what is a reasonable path and what is not I am left to worry that a sufficiently malicious opponent might be able to find strange collections of paths which don't arise from any possible network.
In the worst case, this might make your problem as difficult as http://en.wikipedia.org/wiki/Set_cover_problem, in the sense that somebody, given a problem in Set Cover, might be able to find a set of paths and nodes that produces a path-cut problem whose solution can be turned into a solution of the original Set Cover problem.
If so - and I haven't even attempted to prove it - your problem is NP-Complete, but since you have only 100 nodes it is possible that some of the many papers you can find on Set Cover will point at an approach that will work in practice, or can provide a good enough approximation for you. Apart from the Wikipedia article, http://www.cs.sunysb.edu/~algorith/files/set-cover.shtml points you at two implementations, and a quick search finds the following summary at the start of a paper in http://www.ise.ufl.edu/glan/files/2011/12/EJORpaper.pdf:
The SCP is an NP-hard problem in the strong sense (Garey and Johnson, 1979) and many algorithms
have been developed for solving the SCP. The exact algorithms (Fisher and Kedia, 1990; Beasley and
JØrnsten, 1992; Balas and Carrera, 1996) are mostly based on branch-and-bound and branch-and-cut.
Caprara et al. (2000) compared different exact algorithms for the SCP. They show that the best exact
algorithm for the SCP is CPLEX. Since exact methods require substantial computational effort to solve
large-scale SCP instances, heuristic algorithms are often used to find a good or near-optimal solution in a
reasonable time. Greedy algorithms may be the most natural heuristic approach for quickly solving large
combinatorial problems. As for the SCP, the simplest such approach is the greedy algorithm of Chvatal
(1979). Although simple, fast and easy to code, greedy algorithms could rarely generate solutions of good
quality....
Edit: If you actually want to destroy all paths, and not just those from a given list, then the max-flow techniques explained by mcdowella are much better than this approach.
As mentioned by mcdowella, the problem is NP-hard in general. However, the way your example looks, an exact approach might be feasible.
First, you can delete from the paths all vertices that are not available for deletion. Then, reduce the instance by eliminating dominated vertices. For example, every path that contains 15 also contains 2, so it never makes sense to delete 15. In the example, if all vertices were available, then 2, 3, 9, and 35 would dominate all other vertices, so you'd have the problem down to 4 vertices.
Then take a vertex from the shortest path and branch recursively into two cases: delete it (remove all paths containing it) or don't delete it (delete it from all paths). (If the path has length one, omit the second case.) You can then check for dominance again.
This is exponential in the worst case, but might be sufficient for your examples.
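A rough sketch of this reduce-and-branch idea (dominance elimination omitted for brevity; paths and availableNodes as in the question):

def minCutSet(paths, availableNodes):
    #keep only the deletable vertices of each path
    return branch([frozenset(p) & frozenset(availableNodes) for p in paths])

def branch(paths):
    if not paths:
        return set()  #nothing left to cut
    shortest = min(paths, key=len)
    if not shortest:
        return None  #some path can no longer be cut
    v = next(iter(shortest))
    #case 1: delete v - it kills every path containing it
    with_v = branch([p for p in paths if v not in p])
    #case 2: never delete v - drop it from every path
    without_v = branch([p - frozenset([v]) for p in paths])
    options = []
    if with_v is not None:
        options.append(with_v | set([v]))
    if without_v is not None:
        options.append(without_v)
    return min(options, key=len) if options else None

print(minCutSet(testPaths, [2, 3, 6, 7, 9]))  #e.g. set([2, 3])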

How to tractably solve the assignment optimisation task

I'm working on a script that takes the elements from companies and pairs them up with the elements of people. The goal is to optimize the pairings such that the sum of all pair values is maximized (the value of each individual pairing is precomputed and stored in the dictionary ctrPairs).
They're all paired 1:1; each company has only one person, each person belongs to only one company, and the number of companies equals the number of people. I used a top-down approach with a memoization table (memDict) to avoid recomputing areas that have already been solved.
I believe that I could vastly improve the speed of what's going on here, but I'm not really sure how. The areas I'm worried about are marked with #slow?; any advice would be appreciated (the script works for input lists with n < 15, but it gets incredibly slow for n > ~15).
def getMaxCTR(companies, people):
    if(memDict.has_key((companies, people))):
        return memDict[(companies, people)]  #here's where we return the memoized version if it exists
    if(not len(companies) or not len(people)):
        return 0
    maxCTR = None
    remainingCompanies = companies[1:len(companies)]  #slow?
    for p in people:
        remainingPeople = list(people)  #slow?
        remainingPeople.remove(p)  #slow?
        ctr = ctrPairs[(companies[0], p)] + getMaxCTR(remainingCompanies, tuple(remainingPeople))  #recurse
        if(ctr > maxCTR):
            maxCTR = ctr
    memDict[(companies, people)] = maxCTR
    return maxCTR
To all those who wonder about the use of learning theory, this question is a good illustration. The right question is not about a "fast way to bounce between lists and tuples in Python"; the reason for the slowness is something deeper.
What you're trying to solve here is known as the assignment problem: given two lists of n elements each and n×n values (the value of each pair), how to assign them so that the total "value" is maximized (or equivalently, minimized). There are several algorithms for this, such as the Hungarian algorithm (Python implementation), or you could solve it using more general min-cost flow algorithms, or even cast it as a linear program and use an LP solver. Most of these would have a running time of O(n^3).
What your algorithm above does is to try each possible way of pairing them. (The memoisation only helps to avoid recomputing answers for pairs of subsets, but you're still looking at all pairs of subsets.) This approach is at least Ω(n^2·2^(2n)). For n=16, n^3 is 4096 and n^2·2^(2n) is 1099511627776. There are constant factors in each algorithm of course, but see the difference? :-) (The approach in the question is still better than the naive O(n!), which would be much worse.) Use one of the O(n^3) algorithms, and I predict it should run in time for up to n=10000 or so, instead of just up to n=15.
"Premature optimization is the root of all evil", as Knuth said, but so is delayed/overdue optimization: you should first carefully consider an appropriate algorithm before implementing it, not pick a bad one and then wonder what parts of it are slow. :-) Even badly implementing a good algorithm in Python would be orders of magnitude faster than fixing all the "slow?" parts of the code above (e.g., by rewriting in C).
I see two issues here:
Efficiency: you're recreating the same remainingPeople sublists for each company. It would be better to create all the remainingPeople and all the remainingCompanies once and then do all the combinations.
Memoization: you're using tuples instead of lists so that they can serve as dict keys for memoization, but tuple equality is order-sensitive. IOW: (1,2) != (2,1). You'd be better off using sets and frozensets for this: frozenset((1,2)) == frozenset((2,1)).
This line:
remainingCompanies = companies[1:len(companies)]
Can be replaced with this line:
remainingCompanies = companies[1:]
For a very slight speed increase. That's the only improvement I see.
If you want to get a copy of a tuple as a list you can do
mylist = list(mytuple)
