I am trying to design an automatic cyclical matching algorithm. The system's purpose is to match the transfer requests of employees who want to move from their current division to another division (there are several reasons to move, such as wanting to return to their hometown or to look after family). Unfortunately, company policy only allows them to move if they can find someone from another division who wants to take their position. In practice, employees have had to find partners by posting their desired destination in a Facebook group and arranging a swap with two or more people, creating a cyclical rotation themselves, like this:
Mr. A works for division X and wants to move to division Y.
Mr. B works for division Y and wants to move to division Z.
Mr. C works for division Z and wants to move to division X.
In this case, one of them (say Mr. A) has to contact Mr. B and Mr. C to arrange the cyclical move so that all of them achieve their goals. More than 2,000 employees face this problem in my company (which has around 30k employees).
So, could anyone please suggest how I can study and start to create an AI system, or another algorithm, that could help my colleagues accomplish this easily?
P.S. I have experience with and am familiar with Python.
Thanks in advance.
In real life I assume you will be getting your data through a DB query, since you talk about thousands of records, but for a small sample let's just put our requests in a list. The idea is: start with one request, see whether there are other people whose target location is the source location of our request, then explore each of those in the same way until we find a cycle.
Since you want the longest cycle, and probably need other criteria as well, we need to compute all the possibilities; another function can then select the best choice.
This will find all possible cycles for a given request:
from dataclasses import dataclass

@dataclass
class Request:
    code: int
    who: str
    source: str
    target: str
    days: int

requests = [Request(1, 'A', 'X', 'Y', 3),
            Request(2, 'B', 'Y', 'Z', 5),
            Request(3, 'C', 'Z', 'X', 2),
            Request(4, 'D', 'Y', 'X', 3),
            Request(5, 'E', 'W', 'Y', 3)]
cycles = []

def find_cycles(basereq):
    global cycles
    cycles = []
    find_cycles2(basereq, basereq.target, [basereq.code])
    print(cycles)

def find_cycles2(basereq, pivot, cycle):
    global cycles
    for otherreq in [r for r in requests if r.target == basereq.source and r.code not in cycle]:
        if otherreq.source == pivot:
            cycles.append(cycle + [otherreq.code])
        else:
            find_cycles2(otherreq, pivot, cycle + [otherreq.code])
>>> find_cycles(requests[0])
[[1, 3, 2], [1, 4]]
>>> find_cycles(requests[1])
[[2, 1, 3]]
>>> find_cycles(requests[4])
[]
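If it helps, here is a self-contained variant of the same search that avoids the global, returns the cycles, and then picks a "best" one. I simply assume here that the longest cycle is best; swap in whatever criterion you actually need (e.g. something based on the days field).

```python
from dataclasses import dataclass

@dataclass
class Request:
    code: int
    who: str
    source: str
    target: str
    days: int

requests = [Request(1, 'A', 'X', 'Y', 3),
            Request(2, 'B', 'Y', 'Z', 5),
            Request(3, 'C', 'Z', 'X', 2),
            Request(4, 'D', 'Y', 'X', 3),
            Request(5, 'E', 'W', 'Y', 3)]

def find_cycles(basereq):
    """Return every cycle of request codes that starts with basereq."""
    found = []

    def explore(req, cycle):
        # look for requests whose target is the source of the request we
        # are extending - such a person could take req's current position
        for other in requests:
            if other.target == req.source and other.code not in cycle:
                if other.source == basereq.target:
                    found.append(cycle + [other.code])  # cycle closed
                else:
                    explore(other, cycle + [other.code])

    explore(basereq, [basereq.code])
    return found

cycles = find_cycles(requests[0])
print(cycles)                   # [[1, 3, 2], [1, 4]]
best = max(cycles, key=len)     # assumption: the longest cycle is best
print(best)                     # [1, 3, 2]
```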
I have a group of people and, for each of them, a list of friends and a list of foes. I want to line them up (in a line, not a circle as at a table) so that preferably no enemies, only friends, are next to each other.
Example with the Input: https://gist.github.com/solars/53a132e34688cc5f396c
I think I need to use graph coloring to solve this, but I'm not sure how. I think I have to leave out the friends (or foes) list to make it simpler and map the rest to a graph.
Does anyone know how to solve such problems and can tell me if I'm on the right path?
Code samples or online examples would also be nice; I don't mind the programming language. I usually use Ruby, Java, Python, or JavaScript.
Thanks a lot for your help!
It has already been mentioned in the comments that this problem is equivalent to the travelling salesman problem. I would like to elaborate on that:
Every person is a vertex, and there is an edge between two vertices exactly when the corresponding persons can sit next to each other. Finding a possible seating arrangement is then equivalent to finding a Hamiltonian path in this graph.
So this problem is NP-complete. The most naive solution would be to try all possible permutations, resulting in O(n!) running time. There are many well-known approaches which perform better than O(n!) and are freely available on the web. I would like to mention Held-Karp, which runs in O(n^2 * 2^n) and is pretty straightforward to code, here in Python:
# graph[i] contains all possible neighbors of the i-th person
def held_karp(graph):
    n = len(graph)  # number of persons
    # remember the set of already seated persons (as a bitmask) and the last
    # person in the line; thus a configuration consists of the set of seated
    # persons and the last person in the line
    # start with every possible person:
    possible = set((2**i, i) for i in range(n))
    # remember the predecessor configuration of every possible configuration:
    preds = dict(((2**i, i), (0, -1)) for i in range(n))
    # there are at most n persons in the line - every iteration seats one more
    for _ in range(n - 1):
        next_possible = set()
        # iterate through all possible configurations
        for seated, last in possible:
            for neighbor in graph[last]:
                bit_mask = 2**neighbor
                if (bit_mask & seated) == 0:  # this possible neighbor is not yet seated!
                    # add the neighbor to the bitmask of seated persons
                    next_config = (seated | bit_mask, neighbor)
                    next_possible.add(next_config)
                    preds[next_config] = (seated, last)
        possible = next_possible
    # now reconstruct the line
    if not possible:
        return []  # it is not possible for all to be seated
    line = []
    config = possible.pop()  # any configuration left has all n persons seated and is good enough!
    while config[1] != -1:
        line.insert(0, config[1])
        config = preds[config]  # go a step back
    return line
Disclaimer: this code is not properly tested, but I hope you can get the gist of it.
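To build the graph[i] input from friends/foes lists like those in the linked gist, something along these lines should work. The names people and foes are placeholders for your actual data, and for a handful of people you can even brute-force a line to sanity-check the construction:

```python
from itertools import permutations

# Hypothetical input in the spirit of the linked gist: for each person, a
# list of foes they must not sit next to (friends could be used afterwards
# to rank the valid arrangements).
people = ['ann', 'bob', 'cid', 'dee']
foes = {'ann': ['bob'], 'bob': ['ann', 'dee'], 'cid': [], 'dee': ['bob']}

# graph[i] = indices of everyone person i may sit next to (no foe relation
# in either direction); this is the input format held_karp above expects
index = {name: i for i, name in enumerate(people)}
graph = [[index[q] for q in people
          if q != p and q not in foes[p] and p not in foes[q]]
         for p in people]

# for a handful of people we can brute-force a Hamiltonian path to
# sanity-check the graph construction
def any_line(graph):
    n = len(graph)
    for perm in permutations(range(n)):
        if all(perm[i + 1] in graph[perm[i]] for i in range(n - 1)):
            return list(perm)
    return []

line = any_line(graph)
print([people[i] for i in line])   # one valid arrangement, e.g. ann dee cid bob
```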
I am trying to solve a problem related to graph theory but can't seem to remember/find/understand the proper/best approach so I figured I'd ask the experts...
I have a list of paths between two nodes (1 and 10 in the example code). I'm trying to find the minimum number of nodes to remove to cut all paths. I'm also only able to remove certain nodes.
I currently have it implemented (below) as a brute force search. This works fine on my test set but is going to be an issue when scaling up to graphs that have paths in the 100Ks and available nodes in the 100s (factorial blow-up). Right now I don't care about the order I remove nodes in, but at some point I will want to take that into account (switch sets to lists in the code below).
I believe there should be a way to solve this using a max flow/min cut algorithm. Everything I'm reading though is going way over my head in some way. It's been several (SEVERAL) years since doing this type of stuff and I can't seem to remember anything.
So my questions are:
1) Is there a better way to solve this problem other than testing all combinations and taking the smallest set?
2) If so, can you either explain it or, preferably, give pseudo code to help explain? I'm guessing there is probably a library that already does this in some way (I have been looking and using networkX lately but am open to others)
3) If not (or even if so), any suggestions for how to multithread/multiprocess the solution? I want to get every bit of performance I can from the computer. (I have found a few good threads on this question; I just haven't had a chance to implement them, so I figured I'd ask at the same time. I first want to get everything working properly before optimizing.)
4) General suggestions on making code more "Pythonic" (probably will help with performance too). I know there are improvements I can make and am still new to Python.
Thanks for the help.
#!/usr/bin/env python

def bruteForcePaths(paths, availableNodes, setsTested, testCombination, results, loopId):
    # for each available node, we are going to
    #   check if we have already tested a set with this node
    #     if true  - move to the next node
    #     if false - remove the paths affected;
    #       if there are paths left,
    #         record the combo, continue removing with the current combo
    #       if there are no paths left,
    #         record success, record the combo, continue to the next node
    # local copies
    currentPaths = list(paths)
    currentAvailableNodes = list(availableNodes)
    currentSetsTested = set(setsTested)
    currentTestCombination = set(testCombination)
    currentLoopId = loopId + 1
    print "loop ID: %d" % (currentLoopId)
    print "currentAvailableNodes:"
    for set1 in currentAvailableNodes:
        print "   %s" % (set1)
    for node in currentAvailableNodes:
        # add to the current test set
        print "%d-current node: %s current combo: %s" % (currentLoopId, node, currentTestCombination)
        currentTestCombination.add(node)
        # print "Testing: %s" % currentTestCombination
        # print "Sets tested:"
        # for set1 in currentSetsTested:
        #     print "   %s" % set1
        if currentTestCombination in currentSetsTested:
            # we already tested this combination of nodes, so go to the next node
            print "Already tested: %s" % currentTestCombination
            currentTestCombination.remove(node)
            continue
        # get all the paths that don't have node in them
        currentRemainingPaths = [path for path in currentPaths if not (node in path)]
        # if there are no paths left
        if len(currentRemainingPaths) == 0:
            # save this combination
            print "successful combination: %s" % currentTestCombination
            results.append(frozenset(currentTestCombination))
            # remember that we tested this combo
            currentSetsTested.add(frozenset(currentTestCombination))
            # now remove the node that was added, and go to the next one
            currentTestCombination.remove(node)
        else:
            # this combo didn't work; save it so we don't test it again
            currentSetsTested.add(frozenset(currentTestCombination))
            newAvailableNodes = list(currentAvailableNodes)
            newAvailableNodes.remove(node)
            bruteForcePaths(currentRemainingPaths,
                            newAvailableNodes,
                            currentSetsTested,
                            currentTestCombination,
                            results,
                            currentLoopId)
            currentTestCombination.remove(node)
    print "-------------------"
    # need to pass "up" the tested sets from this loop
    setsTested.update(currentSetsTested)
    return None
if __name__ == '__main__':
    testPaths = [
        [1, 2, 14, 15, 16, 18, 9, 10],
        [1, 2, 24, 25, 26, 28, 9, 10],
        [1, 2, 34, 35, 36, 38, 9, 10],
        [1, 3, 44, 45, 46, 48, 9, 10],
        [1, 3, 54, 55, 56, 58, 9, 10],
        [1, 3, 64, 65, 66, 68, 9, 10],
        [1, 2, 14, 15, 16, 7, 10],
        [1, 2, 24, 7, 10],
        [1, 3, 34, 35, 7, 10],
        [1, 3, 44, 35, 6, 10],
    ]
    setsTested = set()
    availableNodes = [2, 3, 6, 7, 9]
    results = list()
    currentTestCombination = set()
    bruteForcePaths(testPaths, availableNodes, setsTested, currentTestCombination, results, 0)
    print "results:"
    for result in sorted(results, key=len):
        print result
UPDATE:
I reworked the code using itertools to generate the combinations. It makes the code cleaner and faster (and should be easier to multiprocess). Now to figure out the dominant nodes as suggested, and to multiprocess the function.
from itertools import combinations

def bruteForcePaths3(paths, availableNodes, results):
    # start by taking each combination of 1 node, then 2, then 3, etc.
    for i in range(1, len(availableNodes) + 1):
        print "combo number: %d" % i
        currentCombos = combinations(availableNodes, i)
        for combo in currentCombos:
            # get a fresh copy of paths for this combination
            currentPaths = list(paths)
            currentRemainingPaths = []
            # print combo
            for node in combo:
                # determine a better way to remove nodes; for now - if it's in, we remove
                currentRemainingPaths = [path for path in currentPaths if not (node in path)]
                currentPaths = currentRemainingPaths
            # if there are no paths left
            if len(currentRemainingPaths) == 0:
                # save this combination
                print combo
                results.append(frozenset(combo))
    return None
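For reference, here is a compact Python 3 sketch of the same idea that stops at the first combination size that cuts everything, so the first hit is guaranteed to be a smallest set. On the test paths it finds {2, 3}, since every path goes through node 2 or node 3:

```python
from itertools import combinations

def smallest_cut(paths, available_nodes):
    """Return the smallest combination of removable nodes that breaks
    every path, or None if even removing all of them is not enough."""
    for size in range(1, len(available_nodes) + 1):
        for combo in combinations(available_nodes, size):
            # a path survives only if it avoids every node in the combo
            if not any(all(node not in path for node in combo)
                       for path in paths):
                return set(combo)
    return None

test_paths = [
    [1, 2, 14, 15, 16, 18, 9, 10],
    [1, 2, 24, 25, 26, 28, 9, 10],
    [1, 2, 34, 35, 36, 38, 9, 10],
    [1, 3, 44, 45, 46, 48, 9, 10],
    [1, 3, 54, 55, 56, 58, 9, 10],
    [1, 3, 64, 65, 66, 68, 9, 10],
    [1, 2, 14, 15, 16, 7, 10],
    [1, 2, 24, 7, 10],
    [1, 3, 34, 35, 7, 10],
    [1, 3, 44, 35, 6, 10],
]
print(smallest_cut(test_paths, [2, 3, 6, 7, 9]))   # {2, 3}
```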
Here is an answer which ignores the list of paths. It just takes a network, a source node, and a target node, and finds the minimum set of nodes within the network (excluding the source and target) whose removal disconnects the source from the target.
If I wanted to find the minimum set of edges, I could find out how just by searching for Max-Flow min-cut. Note that the Wikipedia article at http://en.wikipedia.org/wiki/Max-flow_min-cut_theorem#Generalized_max-flow_min-cut_theorem states that there is a generalized max-flow min-cut theorem which considers vertex capacity as well as edge capacity, which is at least encouraging. Note also that edge capacities are given as Cuv, where Cuv is the maximum capacity from u to v. In the diagram they seem to be drawn as u/v. So the edge capacity in the forward direction can be different from the edge capacity in the backward direction.
To disguise a minimum vertex cut problem as a minimum edge cut problem I propose to make use of this asymmetry. First of all give all the existing edges a huge capacity - for example 100 times the number of nodes in the graph. Now replace every vertex X with two vertices Xi and Xo, which I will call the incoming and outgoing vertices. For every edge between X and Y create an edge between Xo and Yi with the existing capacity going forwards but 0 capacity going backwards - these are one-way edges. Now create an edge between Xi and Xo for each X with capacity 1 going forwards and capacity 0 going backwards.
Now run max-flow min-cut on the resulting graph. Because all the original links have huge capacity, the min cut must all be made up of the capacity 1 links (actually the min cut is defined as a division of the set of nodes into two: what you really want is the set of pairs of nodes Xi, Xo with Xi in one half and Xo in the other half, but you can easily get one from the other). If you break these links you disconnect the graph into two parts, as with standard max-flow min-cut, so deleting these nodes will disconnect the source from the target. Because you have the minimum cut, this is the smallest such set of nodes.
If you can find code for max-flow min-cut, such as the implementations pointed to by http://www.cs.sunysb.edu/~algorith/files/network-flow.shtml, I would expect that it will give you the min cut. If not, for instance if you do it by solving a linear programming problem because you happen to have a linear programming solver handy, notice for example from http://www.cse.yorku.ca/~aaw/Wang/MaxFlowMinCutAlg.html that one half of the min cut is the set of nodes reachable from the source once the graph has been modified to subtract out the edge capacities actually used by the solution - so given just the edge capacities used at max flow you can find it pretty easily.
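If you would rather not hunt for a max-flow library, the vertex-splitting construction above can be sketched in plain Python with Edmonds-Karp. This is my own illustrative code, only tried on the toy example below; if you are already using networkx, its minimum_node_cut(G, s, t) function implements the same idea:

```python
from collections import deque, defaultdict

INF = float('inf')

def min_vertex_cut(edges, source, target, removable):
    """Smallest set of removable vertices whose deletion disconnects
    source from target; None if they cannot be separated that way.
    edges is an iterable of undirected (u, v) pairs."""
    cap = defaultdict(lambda: defaultdict(int))
    nodes = set()
    for u, v in edges:
        nodes.update((u, v))
        # one-way edges between the split halves, huge capacity
        cap[(u, 'out')][(v, 'in')] = INF
        cap[(v, 'out')][(u, 'in')] = INF
    for v in nodes:
        # the capacity-1 internal edge is the cost of deleting vertex v;
        # vertices we may not delete get infinite internal capacity
        deletable = v in removable and v not in (source, target)
        cap[(v, 'in')][(v, 'out')] = 1 if deletable else INF

    s, t = (source, 'out'), (target, 'in')

    def augmenting_path():  # BFS = shortest path first (Edmonds-Karp)
        prev = {s: None}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            if u == t:
                path, v = [], t
                while v != s:
                    path.append((prev[v], v))
                    v = prev[v]
                return path
            for v, c in cap[u].items():
                if c > 0 and v not in prev:
                    prev[v] = u
                    queue.append(v)
        return None

    while True:
        path = augmenting_path()
        if path is None:
            break
        bottleneck = min(cap[u][v] for u, v in path)
        if bottleneck == INF:
            return None  # a path of undeletable vertices remains
        for u, v in path:
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck  # residual edge

    # vertices whose 'in' half is reachable in the residual graph but
    # whose 'out' half is not are exactly the minimum cut
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        for v, c in cap[u].items():
            if c > 0 and v not in seen:
                seen.add(v)
                queue.append(v)
    return {v for v in nodes if (v, 'in') in seen and (v, 'out') not in seen}

# toy example: both 1-2-4-10 and 1-3-4-10 must pass through vertex 4
print(min_vertex_cut([(1, 2), (1, 3), (2, 4), (3, 4), (4, 10)],
                     1, 10, removable={2, 3, 4}))   # {4}
```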
If the paths were not provided as part of the problem I would agree that there should be some way to do this via http://en.wikipedia.org/wiki/Max-flow_min-cut_theorem, given a sufficiently ingenious network construction. However, because you haven't given any indication as to what is a reasonable path and what is not I am left to worry that a sufficiently malicious opponent might be able to find strange collections of paths which don't arise from any possible network.
In the worst case, this might make your problem as difficult as http://en.wikipedia.org/wiki/Set_cover_problem, in the sense that somebody, given a problem in Set Cover, might be able to find a set of paths and nodes that produces a path-cut problem whose solution can be turned into a solution of the original Set Cover problem.
If so - and I haven't even attempted to prove it - your problem is NP-Complete, but since you have only 100 nodes it is possible that some of the many papers you can find on Set Cover will point at an approach that will work in practice, or can provide a good enough approximation for you. Apart from the Wikipedia article, http://www.cs.sunysb.edu/~algorith/files/set-cover.shtml points you at two implementations, and a quick search finds the following summary at the start of a paper in http://www.ise.ufl.edu/glan/files/2011/12/EJORpaper.pdf:
The SCP is an NP-hard problem in the strong sense (Garey and Johnson, 1979) and many algorithms
have been developed for solving the SCP. The exact algorithms (Fisher and Kedia, 1990; Beasley and
JØrnsten, 1992; Balas and Carrera, 1996) are mostly based on branch-and-bound and branch-and-cut.
Caprara et al. (2000) compared different exact algorithms for the SCP. They show that the best exact
algorithm for the SCP is CPLEX. Since exact methods require substantial computational effort to solve
large-scale SCP instances, heuristic algorithms are often used to find a good or near-optimal solution in a
reasonable time. Greedy algorithms may be the most natural heuristic approach for quickly solving large
combinatorial problems. As for the SCP, the simplest such approach is the greedy algorithm of Chvatal
(1979). Although simple, fast and easy to code, greedy algorithms could rarely generate solutions of good
quality....
Edit: if you in fact want to destroy all paths, and not just those from a given list, then the max-flow technique explained by mcdowella is much better than this approach.
As mentioned by mcdowella, the problem is NP-hard in general. However, the way your example looks, an exact approach might be feasible.
First, you can delete all vertices from the paths that are not available for deletion. Then, reduce the instance by eliminating dominated vertices. For example, every path that contains 15 also contains 2, so it never makes sense to delete 15. In the example if all vertices were available, 2, 3, 9, and 35 dominate all other vertices, so you'd have the problem down to 4 vertices.
Then take a vertex from the shortest path and branch recursively into two cases: delete it (remove all paths containing it) or don't delete it (delete it from all paths). (If the path has length one, omit the second case.) You can then check for dominance again.
This is exponential in the worst case, but might be sufficient for your examples.
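The dominance reduction can be sketched like this (the function name is mine): a candidate u is dropped when some other candidate v lies on every path that u lies on, because deleting v then cuts at least as many paths as deleting u.

```python
def reduce_dominated(paths, candidates):
    """Return the candidates that are not dominated by another candidate."""
    # first restrict each path to the deletable vertices
    restricted = [set(p) & candidates for p in paths]
    kept = set(candidates)
    for u in sorted(candidates):
        covered = [p for p in restricted if u in p]
        if not covered:
            kept.discard(u)  # u is on no path at all; deleting it never helps
            continue
        for v in list(kept):
            # v dominates u: every path containing u also contains v
            if v != u and all(v in p for p in covered):
                kept.discard(u)
                break
    return kept

# every path containing 15 also contains 2, so 15 is dominated by 2
paths = [[1, 2, 14, 15, 16], [1, 2, 24, 25], [1, 3, 44, 45]]
print(reduce_dominated(paths, {2, 3, 15}))   # {2, 3}
```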
I posted a similar question on the ESRI forums, but the posting seems to have gone cold :(
Does anyone know how to identify whether an ESRI feature class contains curved (arc) segments?
I have several thousand feature classes; something via Python would be great!
thanks!
Edit: update
Someone at Esri commented that you can detect whether a polyline feature contains an arc segment by comparing the feature's trueCentroid and centroid: if trueCentroid != centroid, the feature contains an arc. This is OK, but I'm still lacking a solution for polygons, as the above method does not work for them.
Here's what we have so far:
lstFCs = arcpy.ListFeatureClasses("*", "Polyline")
for fc in lstFCs:
    rows = arcpy.SearchCursor(fc)
    print fc
    for row in rows:
        shape = row.Shape
        geom = str(shape.centroid.X)
        truegeom = str(shape.trueCentroid.X)
        if geom != truegeom:
            print row.ObjectID
    del row, rows
For polygons, I think you first have to loop through the rings of each polygon to get its boundary polylines, and then you can examine those polylines as you do in your code.
Sorry I cannot help you any further, since I don't have ArcGIS installed (and it has been a long time since I worked with ArcGIS). You might get a better response on gis.stackexchange.com.
Try something like this:
import arcpy, json

arcpy.env.workspace = r"..."  # your database
lstFCs = arcpy.ListFeatureClasses("*", "Polyline")
for fc in lstFCs:
    rows = arcpy.SearchCursor(fc)
    print fc
    for row in rows:
        jsonGeometry = json.loads(row.Shape.JSON)
        # Esri JSON uses 'curvePaths' (polylines) / 'curveRings' (polygons)
        # instead of 'paths' / 'rings' when true curves are present
        if 'curvePaths' in jsonGeometry or 'curveRings' in jsonGeometry:
            print row.ObjectID
    del row, rows
There is a function in spatial SQL that is useful for what you want to do: ST_CurvosityIndex(LineString). Applied to a LineString feature, it returns an index between 1 and 0: 1 means a straight line, 0 a string of lines whose start and end points are at the same coordinate.
Unfortunately, I couldn't find any further information on how the algorithm behind this SQL/MM statement is implemented, but I think you can email Alessandro Furieri (a.furieri@lqt.it), who developed that SQL statement, and ask him about the algorithm that measures how curved a LineString feature is.