Before I describe my problem, I'll summarise what I think I'm looking for: a method for nearest-neighbor searches in Python that is restricted by node type (in my case a node represents an atom and the node type represents its element), so that only the nearest neighbors of a given type are returned. Maybe I am wording my problem incorrectly, but I haven't been able to find any existing methods for this.
I am writing some ring statistics code to find different types of rings for molecular dynamics simulation data. The input data structure is a big array of atom id, atom type, and XYZ positions.
At the moment I only consider single-element systems (for example graphene, so only Carbon atoms are present). So each node is considered the same type when finding its nearest neighbors and calculating the adjacency matrix.
For this, I am using scipy.spatial's KDTree to find all atoms within the bond length, r, of any given atom. If an atom is within radius r of a given atom, I consider it connected, and I populate and update an adjacency dictionary accordingly.
from scipy.spatial import KDTree

def create_adjacency_dict(data, r, leaf_size=5, box_size=None):
    tree = KDTree(data, leafsize=leaf_size, boxsize=box_size)
    # Find all neighbours within radius r of each point.
    all_nn_indices = tree.query_ball_point(data, r, workers=5)
    adj_dict = {}
    for count, item in enumerate(all_nn_indices):
        adj_dict[count] = item  # Populate the adjacency dictionary
    for node, nodes in adj_dict.items():
        if node in nodes:
            nodes.remove(node)  # Remove each node's self-connection
    adj_dict = {k: set(v) for k, v in adj_dict.items()}
    return adj_dict
I would like to expand the code to deal with multi-species systems, for example AB2, AB2C4, etc. (where A, B and C represent different atomic species). However, I am struggling to figure out a nice way to do this.
A
/ \
B B
The obvious method would be a brute-force Euclidean approach. My idea is to input the bond types for a molecule: for AB2 (shown above), you would input something like AB to indicate the type of bond to consider, together with the respective bond length. Then loop over each atom, find its distance to all other atoms, and, for this AB2 example, consider an atom of type A connected to an atom of type B if they are within the bond length, populating the adjacency matrix accordingly. However, I'd like to be able to use the code on large datasets of 50,000+ atoms, so this method seems wasteful.
I suppose I could still use my current method, but just search for, say, the 10 nearest neighbors of a given atom and then do a Euclidean check for each atom pair, following the same approach as above. Still, it seems like a better method should already exist.
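Something like the following is the direction I'm considering: build one KDTree per atom type and query each tree only with the bond length relevant to that type pair. The column layout of data and the bond_lengths dictionary are just illustrative assumptions, and this is untested:

import numpy as np
from scipy.spatial import KDTree

def create_typed_adjacency(data, types, bond_lengths, box_size=None):
    # data: (N, 3) array of XYZ positions; types: length-N array of species labels.
    # bond_lengths: dict mapping an unordered type pair to its bond length,
    # e.g. {frozenset({'A', 'B'}): 1.42}  (hypothetical input format).
    types = np.asarray(types)
    indices = {t: np.flatnonzero(types == t) for t in np.unique(types)}
    trees = {t: KDTree(data[idx], boxsize=box_size) for t, idx in indices.items()}
    adj_dict = {i: set() for i in range(len(data))}
    for pair, r in bond_lengths.items():
        t1, t2 = tuple(pair) if len(pair) == 2 else (next(iter(pair)),) * 2
        # query every atom of type t1 against the tree holding type t2 atoms
        neighbours = trees[t2].query_ball_point(data[indices[t1]], r)
        for i, nn in zip(indices[t1], neighbours):
            for j in (indices[t2][k] for k in nn):
                if i != j:
                    adj_dict[i].add(j)
                    adj_dict[j].add(i)
    return adj_dict

For a single-species system with bond_lengths = {frozenset({'C'}): r}, this should reduce to the behaviour of create_adjacency_dict above.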
Do better methods already exist for this type of problem, i.e. finding nearest neighbors restricted by node type? Or maybe someone knows a more correct wording for my problem, which I think is one of my issues here.
"Then search the data."
This sounds like that old cartoon where someone points to a humorously complex diagram with a tiny label in the middle that says "Here a miracle happens".
Seriously, I am guessing that this searching is what you need to optimize (you do not exactly say).
In turn, this suggests that you are doing a linear search through every atom, calculating the distance to each. Could it be so!?
There is a standard answer for this problem, called an octree.
https://en.wikipedia.org/wiki/Octree
A Netflix TV miniseries, 'The Billion Dollar Code', dramatizes the advantages of this approach: https://www.netflix.com/title/81074012
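To give a sense of how an octree prunes the search, here is a toy sketch of a point octree with a radius query. This is my own illustration, not tuned for production; in practice scipy's cKDTree plays the same role:

import numpy as np

class Octree:
    # Minimal point octree supporting radius queries (toy sketch).
    def __init__(self, center, half_size, capacity=8):
        self.center = np.asarray(center, dtype=float)
        self.half_size = float(half_size)
        self.capacity = capacity
        self.points = []      # (index, xyz) pairs stored at this leaf
        self.children = None  # eight child octants once the leaf splits

    def insert(self, index, point):
        point = np.asarray(point, dtype=float)
        if self.children is not None:
            self._child_for(point).insert(index, point)
            return
        self.points.append((index, point))
        if len(self.points) > self.capacity:
            self._split()

    def _split(self):
        h = self.half_size / 2.0
        self.children = [Octree(self.center + h * np.array([dx, dy, dz]), h, self.capacity)
                         for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)]
        for index, p in self.points:
            self._child_for(p).insert(index, p)
        self.points = []

    def _child_for(self, p):
        # octant index: one bit per axis, matching the order children were built in
        i = 4 * (p[0] >= self.center[0]) + 2 * (p[1] >= self.center[1]) + (p[2] >= self.center[2])
        return self.children[i]

    def query_radius(self, point, r):
        # prune whole octants that cannot contain any point within r
        point = np.asarray(point, dtype=float)
        if np.any(np.abs(point - self.center) > self.half_size + r):
            return []
        if self.children is None:
            return [i for i, p in self.points if np.linalg.norm(p - point) <= r]
        return [i for c in self.children for i in c.query_radius(point, r)]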
I am very new to programming and might have bitten off more than I can chew. I am trying to create a program that finds the shortest route to visit all of the National Parks by importing a CSV file containing the park names and the distances between each pair of parks. Ideally, it would prompt the user for the park they would like to start with and then run through the other parks to find the shortest distance (e.g. if you wanted to start with Yellowstone, it would find the closest park to Yellowstone, then the closest park to that park, etc., then add up all those distances, returning the total mileage and the order the parks were visited in). I think I need to import the CSV file as a dictionary so I can use the park names as keys, but I'm not sure how to work the keys into the algorithm. So far I have the following put together from my limited knowledge:
import csv
import numpy as np

distances = csv.DictReader(open("ds.csv"))
for row in distances:
    print(row)

startingPark = input('Which park would you like to test?')

def NN(distanceArray, start):
    path = [start]
    cost = 0
    N = distanceArray.shape[0]
    mask = np.ones(N, dtype=bool)
    mask[start] = False
    for i in range(N - 1):
        last = path[-1]
        next_ind = np.argmin(distanceArray[last][mask])  # find minimum of remaining locations
        next_loc = np.arange(N)[mask][next_ind]          # convert back to an index into the full array
        path.append(next_loc)
        mask[next_loc] = False
        cost += distanceArray[last, next_loc]
    return path, cost

print(NN(distanceArray, 0))
I know that I have to change all of the array stuff in the actual algorithm part of the code (that's just some code I was able to find through research on here, which I am using as a starting point), but I am unsure of A: how to get it to actually use the input I give, and B: how to make the algorithm work with the dictionary instead of with arrays that are hard-coded into the program. I've tried using the documentation and such, but it goes a bit over my head. Obviously I don't want anyone to just do it for me, but I'd appreciate any pointers or resources people may have. I'm trying very hard to learn, but with no real guidance from anyone who knows what they're doing, I'm having a hard time.
Edit: Here is a sample of the data I have to work with. I pulled all of the distances from Google Maps and input them into a csv file. I think the x's I have in place might also be a problem and might need to be replaced with 0's or something similar, but I haven't gotten to handling that issue yet. (Sorry for not just uploading the picture, not enough rep to post one yet)
https://imgur.com/a/I4c1T
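For reference, here is a rough, untested sketch of how I imagine turning the CSV into a distance matrix plus a name-to-index lookup. The file layout (park names in the header row and first column, 'x' on the diagonal) is my assumption from the screenshot:

import csv
import numpy as np

with open("ds.csv", newline="") as f:
    reader = csv.reader(f)
    parks = next(reader)[1:]  # park names from the header row
    rows = list(reader)

# replace the 'x' placeholders with 0.0 and parse everything else as a float
distanceArray = np.array(
    [[0.0 if cell.strip() == "x" else float(cell) for cell in row[1:]]
     for row in rows]
)

start = parks.index(startingPark)  # map the user's park name to a row index
path, cost = NN(distanceArray, start)
print("order:", [parks[i] for i in path])
print("total miles:", cost)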
I have a group of people, and for each of them a list of friends and a list of foes. I want to line them up (in a line, not a circle as at a table) so that preferably no foes, but only friends, are next to each other.
Example with the Input: https://gist.github.com/solars/53a132e34688cc5f396c
I think I need to use graph coloring to solve this, but I'm not sure how. I think I have to leave out the friends (or foes) list to make it easier and map the rest to a graph.
Does anyone know how to solve such problems and can tell me if I'm on the right path?
Code samples or online examples would also be nice, I don't mind the programming language, I usually use Ruby, Java, Python, Javascript
Thanks a lot for your help!
It was already mentioned in the comments that this problem is equivalent to the travelling salesman problem. I would like to elaborate on that:
Every person is a vertex, and there is an edge between vertices representing persons who can sit next to each other. Finding a possible seating arrangement is then equivalent to finding a Hamiltonian path in this graph.
So the problem is NP-complete. The most naive solution would be to try all possible permutations, resulting in O(n!) running time. There are a lot of well-known approaches which perform better than O(n!) and are freely available on the web. I would like to mention Held-Karp, which runs in O(n^2*2^n) and is pretty straightforward to code, here in Python:
# graph[i] contains all possible neighbors of the i-th person
def held_karp(graph):
    n = len(graph)  # number of persons
    # A configuration consists of the set of already seated persons
    # (stored as a bitmask) and the last person in the line.
    # Start with every possible person alone in the line:
    possible = set((2**i, i) for i in range(n))
    # Remember the predecessor configuration of every configuration:
    preds = dict(((2**i, i), (0, -1)) for i in range(n))
    # There are at most n persons in the line - every iteration seats one more.
    for _ in range(n - 1):
        next_possible = set()
        # iterate through all possible configurations
        for seated, last in possible:
            for neighbor in graph[last]:
                bit_mask = 2**neighbor
                if (bit_mask & seated) == 0:  # this possible neighbor is not yet seated!
                    next_config = (seated | bit_mask, neighbor)  # add neighbor to the seated bitmask
                    next_possible.add(next_config)
                    preds[next_config] = (seated, last)
        possible = next_possible
    # now reconstruct the line
    if not possible:
        return []  # it is not possible for all to be seated
    line = []
    config = possible.pop()  # any configuration left has all n persons seated and is good enough!
    while config[1] != -1:
        line.insert(0, config[1])
        config = preds[config]  # go a step back
    return line
Disclaimer: this code is not properly tested, but I hope you can get the gist of it.
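For concreteness, a small usage example (persons indexed 0 to 3; an edge means the two may sit next to each other):

# adjacency lists keyed by person index
graph = {
    0: [1, 2],
    1: [0, 2, 3],
    2: [0, 1],
    3: [1],
}
print(held_karp(graph))  # one valid line, e.g. [2, 0, 1, 3]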
I'm studying community detection in networks.
I'm using networkx and Python and I need to implement this algorithm: http://arxiv.org/pdf/0803.0476.pdf
This is how I tried to solve it:
First I make a list of lists which contains as many lists (communities) as there are nodes in the graph, so that I can look up a community by index. Then for every node I find its neighbours and calculate the modularity gain like this:
q1 = (sum_in + ki_in) / float(2 * m) - pow(sum_tot + ki, 2) / float(pow(2 * m, 2))
q2 = sum_in / float(2 * m) - pow(sum_tot, 2) / float(pow(2 * m, 2)) - pow(ki, 2) / float(pow(2 * m, 2))
q = q1 - q2
where
for x in temp:  # temp = list of neighbors of node i
    sum_in += G.degree(x, weight='weight')
    sum_tot += G.in_degree(x, weight='weight')
    ki_in += weights[i, x]
ki = G.in_degree(i, weight='weight')
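Written out, what my code is computing is (this is my reading of eq. (2) in the paper, so it may be exactly where the mistake is):

\Delta Q = \left[\frac{\Sigma_{in} + k_{i,in}}{2m} - \left(\frac{\Sigma_{tot} + k_i}{2m}\right)^2\right] - \left[\frac{\Sigma_{in}}{2m} - \left(\frac{\Sigma_{tot}}{2m}\right)^2 - \left(\frac{k_i}{2m}\right)^2\right]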
Then I find the maximum q and move node i to the corresponding new community.
But this doesn't work; it seems like I made a mistake in the formula, because for a large number of nodes the algorithm doesn't find communities.
Does anyone know how to solve this?
I do not have access to the link you provided. (I intended to comment, but I cannot add comments at this point.)
You don't need to solve this yourself: the algorithm is already implemented in Python in the community package. You can have a look at how they implemented it in the source code.
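For example, with the python-louvain package (imported as community; the karate club graph here is just a stand-in for your data):

import networkx as nx
import community  # the python-louvain package

G = nx.karate_club_graph()
partition = community.best_partition(G)  # maps each node to a community id
print(partition)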
If you do have to implement it yourself for an assignment, try to avoid the bad habit of going on Stack Overflow; you learn more by finding out by yourself ;)
I am trying to solve a problem related to graph theory but can't seem to remember/find/understand the proper/best approach so I figured I'd ask the experts...
I have a list of paths between two nodes (1 and 10 in the example code). I'm trying to find the minimum number of nodes to remove to cut all paths. I'm also only able to remove certain nodes.
I currently have it implemented (below) as a brute-force search. This works fine on my test set but is going to be an issue when scaling up to graphs with paths in the 100K range and available nodes in the 100 range (a factorial blow-up). Right now I'm not caring about the order I remove nodes in, but at some point I will want to take that into account (switch the sets to lists in the code below).
I believe there should be a way to solve this using a max flow/min cut algorithm. Everything I'm reading though is going way over my head in some way. It's been several (SEVERAL) years since doing this type of stuff and I can't seem to remember anything.
So my questions are:
1) Is there a better way to solve this problem other than testing all combinations and taking the smallest set?
2) If so, can you either explain it or, preferably, give pseudocode to help explain? I'm guessing there is probably a library that already does this in some way (I have been looking at and using networkX lately, but am open to others).
3) If not (or even if so), suggestions for how to multithread/multiprocess the solution? I want to get every bit of performance I can from the computer. (I have found a few good threads on this question; I just haven't had a chance to implement them, so I figured I'd ask at the same time. I first want to get everything working properly before optimizing.)
4) General suggestions on making code more "Pythonic" (probably will help with performance too). I know there are improvements I can make and am still new to Python.
Thanks for the help.
#!/usr/bin/env python

def bruteForcePaths(paths, availableNodes, setsTested, testCombination, results, loopId):
    # For each available node, we are going to:
    #   check if we have already tested a set with this node
    #     if true:  move to the next node
    #     if false: remove the paths affected;
    #       if there are paths left,
    #         record the combo and continue removing with the current combo;
    #       if there are no paths left,
    #         record success, record the combo, and continue to the next node
    # local copies
    currentPaths = list(paths)
    currentAvailableNodes = list(availableNodes)
    currentSetsTested = set(setsTested)
    currentTestCombination = set(testCombination)
    currentLoopId = loopId + 1
    print("loop ID: %d" % currentLoopId)
    print("currentAvailableNodes:")
    for node in currentAvailableNodes:
        print("  %s" % node)
    for node in currentAvailableNodes:
        # add to the current test set
        print("%d-current node: %s current combo: %s" % (currentLoopId, node, currentTestCombination))
        currentTestCombination.add(node)
        # print("Testing: %s" % currentTestCombination)
        # print("Sets tested:")
        # for set1 in currentSetsTested:
        #     print("  %s" % set1)
        if currentTestCombination in currentSetsTested:
            # we already tested this combination of nodes, so go to the next node
            print("Already tested: %s" % currentTestCombination)
            currentTestCombination.remove(node)
            continue
        # get all the paths that don't have this node in them
        currentRemainingPaths = [path for path in currentPaths if node not in path]
        # if there are no paths left
        if len(currentRemainingPaths) == 0:
            # save this combination
            print("successful combination: %s" % currentTestCombination)
            results.append(frozenset(currentTestCombination))
            # remember that we tested this combo
            currentSetsTested.add(frozenset(currentTestCombination))
            # now remove the node that was added, and go to the next one
            currentTestCombination.remove(node)
        else:
            # this combo didn't work; save it so we don't test it again
            currentSetsTested.add(frozenset(currentTestCombination))
            newAvailableNodes = list(currentAvailableNodes)
            newAvailableNodes.remove(node)
            bruteForcePaths(currentRemainingPaths,
                            newAvailableNodes,
                            currentSetsTested,
                            currentTestCombination,
                            results,
                            currentLoopId)
            currentTestCombination.remove(node)
    print("-------------------")
    # pass the tested sets from this loop back "up"
    setsTested.update(currentSetsTested)
    return None

if __name__ == '__main__':
    testPaths = [
        [1, 2, 14, 15, 16, 18, 9, 10],
        [1, 2, 24, 25, 26, 28, 9, 10],
        [1, 2, 34, 35, 36, 38, 9, 10],
        [1, 3, 44, 45, 46, 48, 9, 10],
        [1, 3, 54, 55, 56, 58, 9, 10],
        [1, 3, 64, 65, 66, 68, 9, 10],
        [1, 2, 14, 15, 16, 7, 10],
        [1, 2, 24, 7, 10],
        [1, 3, 34, 35, 7, 10],
        [1, 3, 44, 35, 6, 10],
    ]
    setsTested = set()
    availableNodes = [2, 3, 6, 7, 9]
    results = list()
    currentTestCombination = set()
    bruteForcePaths(testPaths, availableNodes, setsTested, currentTestCombination, results, 0)
    print("results:")
    for result in sorted(results, key=len):
        print(result)
UPDATE:
I reworked the code using itertools to generate the combinations. It makes the code cleaner and faster (and should be easier to multiprocess). Now to figure out the dominated nodes as suggested and multiprocess the function.
from itertools import combinations

def bruteForcePaths3(paths, availableNodes, results):
    # take each combination 1 node at a time, then 2, etc.
    for i in range(1, len(availableNodes) + 1):
        print("combo number: %d" % i)
        currentCombos = combinations(availableNodes, i)
        for combo in currentCombos:
            # get a fresh copy of paths for this combination
            currentPaths = list(paths)
            currentRemainingPaths = []
            # print(combo)
            for node in combo:
                # determine a better way to remove nodes; for now, drop every path containing the node
                currentRemainingPaths = [path for path in currentPaths if node not in path]
                currentPaths = currentRemainingPaths
            # if there are no paths left
            if len(currentRemainingPaths) == 0:
                # save this combination
                print(combo)
                results.append(frozenset(combo))
    return None
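Called with the same test data as in the original code:

results = []
bruteForcePaths3(testPaths, availableNodes, results)
# with the sample paths above, frozenset({2, 3}) comes out as a smallest cut
print(sorted(results, key=len)[0])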
Here is an answer which ignores the list of paths. It just takes a network, a source node, and a target node, and finds the minimum set of nodes within the network, excluding the source and target, whose removal disconnects the source from the target.
If I wanted to find the minimum set of edges, I could find out how just by searching for Max-Flow min-cut. Note that the Wikipedia article at http://en.wikipedia.org/wiki/Max-flow_min-cut_theorem#Generalized_max-flow_min-cut_theorem states that there is a generalized max-flow min-cut theorem which considers vertex capacity as well as edge capacity, which is at least encouraging. Note also that edge capacities are given as Cuv, where Cuv is the maximum capacity from u to v. In the diagram they seem to be drawn as u/v. So the edge capacity in the forward direction can be different from the edge capacity in the backward direction.
To disguise a minimum vertex cut problem as a minimum edge cut problem I propose to make use of this asymmetry. First of all give all the existing edges a huge capacity - for example 100 times the number of nodes in the graph. Now replace every vertex X with two vertices Xi and Xo, which I will call the incoming and outgoing vertices. For every edge between X and Y create an edge between Xo and Yi with the existing capacity going forwards but 0 capacity going backwards - these are one-way edges. Now create an edge between Xi and Xo for each X with capacity 1 going forwards and capacity 0 going backwards.
Now run max-flow min-cut on the resulting graph. Because all the original links have huge capacity, the min cut must all be made up of the capacity 1 links (actually the min cut is defined as a division of the set of nodes into two: what you really want is the set of pairs of nodes Xi, Xo with Xi in one half and Xo in the other half, but you can easily get one from the other). If you break these links you disconnect the graph into two parts, as with standard max-flow min-cut, so deleting these nodes will disconnect the source from the target. Because you have the minimum cut, this is the smallest such set of nodes.
If you can find code for max-flow min-cut, such as those pointed to by http://www.cs.sunysb.edu/~algorith/files/network-flow.shtml I would expect that it will give you the min-cut. If not, for instance if you do it by solving a linear programming problem because you happen to have a linear programming solver handy, notice for example from http://www.cse.yorku.ca/~aaw/Wang/MaxFlowMinCutAlg.html that one half of the min cut is the set of nodes reachable from the source when the graph has been modifies to subtract out the edge capacities actually used by the solution - so given just the edge capacities used at max flow you can find it pretty easily.
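If networkx is available, here is a sketch of the construction above (my own illustration; nx.minimum_cut is networkx's standard max-flow/min-cut entry point, and networkx also ships nx.minimum_node_cut, which performs essentially this splitting internally):

import networkx as nx

def min_vertex_cut(G, source, target):
    # Split every vertex X into (X, 'in') -> (X, 'out') with capacity 1;
    # original edges get a huge capacity so the min cut can only use unit edges.
    H = nx.DiGraph()
    big = 100 * len(G)
    for x in G.nodes():
        H.add_edge((x, 'in'), (x, 'out'), capacity=1)
    for u, v in G.edges():
        H.add_edge((u, 'out'), (v, 'in'), capacity=big)
        H.add_edge((v, 'out'), (u, 'in'), capacity=big)
    # flow leaves the source from its 'out' half and enters the target at its
    # 'in' half, so neither endpoint's own unit edge can be cut
    cut_value, (S, T) = nx.minimum_cut(H, (source, 'out'), (target, 'in'))
    # the cut vertices are exactly those whose halves fall on opposite sides
    return {x for x in G.nodes() if (x, 'in') in S and (x, 'out') in T}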
If the paths were not provided as part of the problem I would agree that there should be some way to do this via http://en.wikipedia.org/wiki/Max-flow_min-cut_theorem, given a sufficiently ingenious network construction. However, because you haven't given any indication as to what is a reasonable path and what is not I am left to worry that a sufficiently malicious opponent might be able to find strange collections of paths which don't arise from any possible network.
In the worst case, this might make your problem as difficult as http://en.wikipedia.org/wiki/Set_cover_problem, in the sense that somebody, given a problem in Set Cover, might be able to find a set of paths and nodes that produces a path-cut problem whose solution can be turned into a solution of the original Set Cover problem.
If so - and I haven't even attempted to prove it - your problem is NP-Complete, but since you have only 100 nodes it is possible that some of the many papers you can find on Set Cover will point at an approach that will work in practice, or can provide a good enough approximation for you. Apart from the Wikipedia article, http://www.cs.sunysb.edu/~algorith/files/set-cover.shtml points you at two implementations, and a quick search finds the following summary at the start of a paper in http://www.ise.ufl.edu/glan/files/2011/12/EJORpaper.pdf:
The SCP is an NP-hard problem in the strong sense (Garey and Johnson, 1979) and many algorithms have been developed for solving the SCP. The exact algorithms (Fisher and Kedia, 1990; Beasley and JØrnsten, 1992; Balas and Carrera, 1996) are mostly based on branch-and-bound and branch-and-cut. Caprara et al. (2000) compared different exact algorithms for the SCP. They show that the best exact algorithm for the SCP is CPLEX. Since exact methods require substantial computational effort to solve large-scale SCP instances, heuristic algorithms are often used to find a good or near-optimal solution in a reasonable time. Greedy algorithms may be the most natural heuristic approach for quickly solving large combinatorial problems. As for the SCP, the simplest such approach is the greedy algorithm of Chvatal (1979). Although simple, fast and easy to code, greedy algorithms could rarely generate solutions of good quality....
Edit: If you want to destroy in fact all paths, and not just those from a given list, then the max-flow techniques explained by mcdowella are much better than this approach.
As mentioned by mcdowella, the problem is NP-hard in general. However, the way your example looks, an exact approach might be feasible.
First, you can delete from the paths all vertices that are not available for deletion. Then, reduce the instance by eliminating dominated vertices. For example, every path that contains 15 also contains 2, so it never makes sense to delete 15. In the example, if all vertices were available, then 2, 3, 9, and 35 would dominate all other vertices, so you'd have the problem down to 4 vertices.
Then take a vertex from the shortest path and branch recursively into two cases: delete it (remove all paths containing it) or don't delete it (delete it from all paths). (If the path has length one, omit the second case.) You can then check for dominance again.
This is exponential in the worst case, but might be sufficient for your examples.
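A compact sketch of that branching idea (my own untested illustration, with the dominance reduction omitted for brevity), run here on the question's testPaths:

def min_cut_nodes(paths, available):
    # smallest subset of `available` hitting every path, or None if impossible
    paths = [[n for n in p if n in available] for p in paths]
    if any(not p for p in paths):
        return None          # some path contains no deletable vertex
    if not paths:
        return set()         # nothing left to cut
    shortest = min(paths, key=len)
    v = shortest[0]          # branch on a vertex from the shortest path
    # case 1: delete v -> every path containing v is destroyed
    with_v = min_cut_nodes([p for p in paths if v not in p], available - {v})
    # case 2: keep v -> v is no longer an option (omitted if the path has length one)
    without_v = None
    if len(shortest) > 1:
        without_v = min_cut_nodes([[n for n in p if n != v] for p in paths],
                                  available - {v})
    candidates = []
    if with_v is not None:
        candidates.append(with_v | {v})
    if without_v is not None:
        candidates.append(without_v)
    return min(candidates, key=len) if candidates else None

print(min_cut_nodes(testPaths, {2, 3, 6, 7, 9}))  # e.g. {2, 3}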