Graham Scan Convex Hull appending too many vertices - python

As the title says, I am trying to implement a Graham scan convex hull algorithm, but I am having trouble with the stack appending too many vertices. The points are read from a .dat file.
For reading points my function is as follows:
def readDataPts(filename, N):
    """Reads the first N lines of data from the input file
    and returns a list of N tuples
    [(x0,y0), (x1, y1), ...]
    """
    count = 0
    points = []
    listPts = open(filename, "r")
    lines = listPts.readlines()
    for line in lines:
        if count < N:
            point_list = line.split()
            count += 1
            for i in range(0, len(point_list)-1):
                points.append((float(point_list[i]), float(point_list[i+1])))
    return points
My Graham scan is as follows:
def theta(pointA, pointB):
    dx = pointB[0] - pointA[0]
    dy = pointB[1] - pointA[1]
    if abs(dx) < 0.1**5 and abs(dy) < 0.1**5:
        t = 0
    else:
        t = dy/(abs(dx) + abs(dy))
    if dx < 0:
        t = 2 - t
    elif dy < 0:
        t = 4 + t
    return t*90
def grahamscan(listPts):
    """Returns the convex hull vertices computed using the
    Graham-scan algorithm as a list of 'h' tuples
    [(u0,v0), (u1,v1), ...]
    """
    listPts.sort(key=lambda x: x[1])
    p0 = listPts[0]
    angles = []
    for each in listPts:
        angle = theta(p0, each)
        angles.append((angle, each))
    angles.sort(key=lambda angle: angle[0])
    stack = []
    for i in range(0, 3):
        stack.append(angles[i][1])
    for i in range(3, len(angles)):
        while not (isCCW(stack[-2], stack[-1], angles[i][1])):
            stack.pop()
        stack.append(angles[i][1])
    merge_error = stack[-1]
    #stack.remove(merge_error)
    #stack.insert(0,merge_error)
    return stack  # stack becomes track of convex hull
def lineFn(ptA, ptB, ptC):
    """Given three points, finds the value which can be used to determine which side of the line the third point lies on"""
    val1 = (ptB[0]-ptA[0])*(ptC[1]-ptA[1])
    val2 = (ptB[1]-ptA[1])*(ptC[0]-ptA[0])
    ans = val1 - val2
    return ans

def isCCW(ptA, ptB, ptC):
    """Return True if the third point is on the left side of the line from ptA to ptB and False otherwise"""
    ans = lineFn(ptA, ptB, ptC) > 0
    return ans
When I run it using the data set taking the first 50 lines as input it produces the stack:
[(599.4, 400.8), (599.0, 514.4), (594.5, 583.9), (550.1, 598.5), (463.3, 597.2), (409.2, 572.5), (406.0, 425.9), (407.3, 410.2), (416.3, 405.3), (485.2, 400.9)]
but it should produce (in this order):
[(599.4, 400.8), (594.5, 583.9), (550.1, 598.5), (472.6, 596.1), (454.2, 589.4), (410.8, 564.2), (416.3, 405.3), (487.7, 401.5)]
Any ideas?

Angle sorting should be done against an extremal reference point (for example, the bottom-most, left-most point), which is guaranteed to be on the convex hull. Your implementation uses the first point of the list as the reference.
Wiki excerpt:
swap points[1] with the point with the lowest y-coordinate
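For illustration, here is a minimal sketch of that fix, reusing the question's theta and isCCW helpers. It anchors the angular sort at the bottom-most point (ties broken by the smaller x), which is guaranteed to lie on the hull, and it assumes at least three distinct input points:

def grahamscan_sketch(listPts):
    # pivot: lowest y, ties broken by lowest x -- this point is always on the hull
    p0 = min(listPts, key=lambda p: (p[1], p[0]))
    # sort the remaining points by their angle around the pivot
    rest = sorted((p for p in listPts if p != p0), key=lambda p: theta(p0, p))
    stack = [p0, rest[0]]
    for pt in rest[1:]:
        # pop while the last two stack points and pt do not make a left turn
        while len(stack) > 1 and not isCCW(stack[-2], stack[-1], pt):
            stack.pop()
        stack.append(pt)
    return stack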

Related

Nearest Neighbour Algorithm?

I keep getting an error when trying to execute my function. I think it is due to calculating the distance between two coordinates, for example [5,2] and [6,7]; it isn't able to calculate the distance between these coordinates.
Here is my code:
import math
import copy

def calculate_distance(starting_x, starting_y, destination_x, destination_y):
    distance = math.hypot(destination_x - starting_x, destination_y - starting_y)  # calculates Euclidean (straight-line) distance between two points
    return distance

def nearest_neighbour_algorithm(selected_map):
    temp_map = copy.deepcopy(selected_map)
    optermised_map = []
    optermised_map.append(temp_map.pop())
    for x in range(len(temp_map)):
        nearest_value = 1000
        neares_index = 0
        for i in range(len(temp_map[x])):
            current_value = calculate_distance(*optermised_map[x], *temp_map[x])
            if nearest_value > current_value:
                nearest_value = current_value
                nearest_index = i
        optermised_map.append(temp_map[nearest_index])
        del temp_map[nearest_index]
    return optermised_map

copy_map = generate_map_1(200, 200, 5)
print("Map Points: ", copy_map)
print("Nearest Neighbour: ", nearest_neighbour_algorithm(copy_map))
The problem is here:
current_value = calculate_distance(optermised_map[x] - temp_map[i])
I am trying to pass two coordinates to my calculate_distance function, but it doesn't allow me to, and I get an error.
That is because you can't subtract a list from another list.
What exactly do you want to do: subtract each element of temp_map from optermised_map?
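If the intent is to measure the distance from the most recently chosen point to each remaining candidate, one possible fix (a sketch, assuming each map entry is an [x, y] pair) is to unpack both pairs into the four scalar arguments that calculate_distance expects:

# Sketch: compare the last point added to the route against candidate i,
# unpacking both [x, y] pairs into calculate_distance's four arguments.
current_value = calculate_distance(*optermised_map[-1], *temp_map[i])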

Optimizing grid connections with python

I have the following situation:
(1) I have a large grid. Based on some conditions I want to further observe specific points/cells in this grid. Each cell has an ID and coordinates X, Y stored separately. In this case let's observe one cell only, marked C on the image, which is located on the edge of the grid. With a formula I can get all the neighbouring cells of the first order (marked 1 on the image) and of the second order (marked 2 on the image).
(2) With a further condition I identify some cells among the neighbouring cells; they are marked in orange on the second image. What I want to do is connect all orange cells with each other by optimizing the distances, taking into account only min() distances. My first attempt was to consider cells only by calculating the distances to cells of the lower order, so when looking at cells among neighbour cells 2, I'm looking at the cells in 1 only. That solution of connections is presented on image 2, but it's not optimal, since the ideal solution would compare the distances of all cells and not only of the cells of the lower neighbour order. By doing this, I'm getting the situation presented on image 3, and the problem is that the cells are then not connected to the centre. What to do?
The current code is:
CO - list of centre points.
data - DataFrame of all IDs with X, Y values.
CO_list = CO['ID'].tolist()

neighbor100 = []
for p in IskanjeCO_list:
    d = get_neighbors100k2(p, len(data))  # function that finds the IDs of first-order neighbours
    neighbor100.append(d)

neighbor200 = []
for p in IskanjeCO_list:
    d = get_neighbors200k2(p, len(data))  # function that finds the IDs of second-order neighbours
    neighbor200.append(d)

flat100 = []
for i in neighbor100:
    for j in i:
        flat100.append(j)

flat200 = []
for i in neighbor200:
    for j in i:
        flat200.append(j)

neighbors100 = flat100
neighbors200 = flat200

data_sosedi100 = data.iloc[flat100,].reset_index(drop=True)
data_sosedi200 = data.iloc[flat200,].reset_index(drop=True)

dist200 = []
for b in flat200:
    d = ((pd.DataFrame((data_sosedi100['X'] - data.iloc[b,]['X'])**2
        + (data_sosedi100['Y'] - data.iloc[b,]['Y'])**2)**0.5)).sum(1)
    dist200.append(d.min())

data_sosedi200['dist'] = dist200
data_sosedi200['id'] = None
for e in CO_list:
    data_sosedi200.loc[data_sosedi200['FID_2'].isin((get_neighbors200k2(e, len(data)))), 'id'] = e
Do you have any suggestions on how to optimize this a bit further? I hope I presented the whole picture; if needed, I'll clarify further. If you see a part of the code where I'd be able to further optimize this loop, I'd be very grateful!
I defined the points manually to work with:
import numpy as np
from operator import itemgetter, attrgetter

nodes = [[-2,1], [-2,0], [-1,0], [0,0], [1,1], [2,1], [2,0], [1,2], [2,2]]
center = [0,0]

def find_neighbor(node):
    n = []
    for i in range(-1, 2):
        for j in range(-1, 2):
            if not (i == 0 and j == 0):
                n.append([node[0]+i, node[1]+j])
    return [N for N in n if N in nodes]

def distance_to_center(node):
    return np.sqrt(node[0]**2 + node[1]**2)

def distance_between_two_nodes(node1, node2):
    return np.sqrt((node1[0]-node2[0])**2 + (node1[1]-node2[1])**2)

def next_node_closest_to_center(node):
    min = distance_to_center(node)
    next_node = node
    for n in find_neighbor(node):
        if distance_to_center(n) < min:
            min = distance_to_center(n)
            next_node = n
    return next_node, min

def get_path_to_center(node):
    node_path = [node]
    distance = 0.
    while node != center:
        new_node = next_node_closest_to_center(node)[0]
        distance += distance_between_two_nodes(node, new_node)
        node_path.append(new_node)
        node = new_node
    return node_path, distance

def furthest_nodes_from_center(nodes):
    max = 0.
    for n in nodes:
        if get_path_to_center(n)[1] > max:
            furthest_nodes_pathwise = []
            max = get_path_to_center(n)[1]
            furthest_nodes_pathwise.append(n)
        elif get_path_to_center(n)[1] == max:
            furthest_nodes_pathwise.append(n)
    return furthest_nodes_pathwise

def farthest_node_from_center(nodes):
    max = 0.
    farthest_node = center
    for n in nodes:
        if distance_to_center(n) > max:
            max = distance_to_center(n)
            farthest_node = n
    return farthest_node

def closest_node_to_center(nodes):
    min = distance_to_center(farthest_node_from_center(nodes))
    for n in nodes:
        if distance_to_center(n) < min:
            min = distance_to_center(n)
            closest_node = n
    return closest_node

def closest_node_center_with_furthest_distance(node_selection):
    if len(node_selection) == 1:
        return node_selection[0]
    else:
        return closest_node_to_center(node_selection)

print(closest_node_center_with_furthest_distance(furthest_nodes_from_center(nodes)))
Output:
[2, 0]
[Finished in 0.266s]
By running it on all nodes I can now determine that the node furthest away path-wise but still closest to the center distance-wise is [2,0] and not [2,2], so we start from there. To find the one on the other side, just split the data as I said into negative and positive x values; if you run it over a list of only the negative-x cells, you will get [-2,1].
Now that you have your two starting cells, [2,0] and [-2,1], I will leave you to figure out the algorithm that navigates to the center passing by all cells, using the steps in my comments (you can now skip step 1, because this answer covers it). A sketch of that split is shown below.
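A sketch of the split, reusing the nodes list and the helper functions defined above (the variable names are placeholders):

# Split the candidate cells by the sign of their x coordinate and pick a
# starting cell on each side with the same "furthest path-wise" criterion.
negative_x = [n for n in nodes if n[0] < 0]
positive_x = [n for n in nodes if n[0] >= 0]

start_right = closest_node_center_with_furthest_distance(furthest_nodes_from_center(positive_x))
start_left = closest_node_center_with_furthest_distance(furthest_nodes_from_center(negative_x))
print(start_right, start_left)  # [2, 0] [-2, 1] for the example nodes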

K-center algorithm

I am currently using Python to solve the k-center problem.
When I run my code, its runtime exceeds the time limit provided by my teacher, and I don't quite know how to improve it so it passes within the limit.
My code is below:
import math

# 1. Import the group
# 2. Find the farthest point in this group.
# 3. Reassign the rest of the points between the two center points
# 4. Find the farthest point from its center point, and make it the newest center point
# 5. Reassign points among all center points
# 6. Repeat steps 4 and 5 until the answer fits the condition

class point():
    def __init__(self, x, y, num, group=[]):
        self.x = x
        self.y = y
        self.id = num
        self.group = []

def range_cus(one, two):
    return math.sqrt(math.pow((one.x-two.x), 2) + math.pow((one.y-two.y), 2))

def reassign(all_points, all_answer):
    for i in range(len(all_answer)):
        all_answer[i].group = []
    for i in range(len(all_points)):
        if all_points[i] not in all_answer:
            min_length = 0
            for j in range(len(all_answer)):
                current_length = range_cus(all_answer[j], all_points[i])
                if min_length == 0:
                    min_length = current_length
                    current_group = all_answer[j]
                elif current_length < min_length:
                    min_length = current_length
                    current_group = all_answer[j]
            current_group.group.append(all_points[i])

def search(all_answer, seek_points_number):
    if seek_points_number == 0:
        return 0
    answer_range = 0
    for j in range(len(all_answer)):
        for i in range(len(all_answer[j].group)):
            if range_cus(all_answer[j], all_answer[j].group[i]) > answer_range:
                answer_range = range_cus(all_answer[j].group[i], all_answer[j])
                answer_obj = all_answer[j].group[i]
    seek_points_number -= 1
    final_answer.append(answer_obj)
    reassign(group, final_answer)
    search(final_answer, seek_points_number)

info = raw_input().split(',')
info = [int(i) for i in info]
group = []
final_answer = []
for i in range(info[0]):
    x = raw_input().split(',')
    group.append(point(float(x[0]), float(x[1]), i+1))
final_answer.append(group[info[2]-1])
group[info[2]-1].group = [point for point in group if point not in final_answer]
search(final_answer, info[1]-1)
print ",".join([str(answer.id) for answer in final_answer])
Please help me examine where the function should be revised to save some runtime.
Example input:
10,3,10  # The first number denotes the number of data points. The second denotes the number of answers I want returned. The third denotes the first center point's id.
21.00,38.00
26.00,28.00
45.00,62.00
31.00,51.00
39.00,44.00
42.00,39.00
21.00,27.00
28.00,29.00
31.00,60.00
27.00,54.00
Example output
10,7,6
You can save at least some time by simply rewriting the range_cus function. As you call this function inside a nested loop, it should be a good point of attack. Try replacing it with
def range_cus(one, two):
    return sqrt((one.x - two.x)**2 + (one.y - two.y)**2)
and remember to do from math import sqrt at the top of your program. In this version you get rid of a lot of attribute lookups on the math module (math.sqrt, math.pow).
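Along the same lines, and only valid here because the distances are only ever compared and never reported: the square root can be dropped entirely and squared distances compared instead, since squaring preserves the ordering of non-negative values. A sketch:

def range_cus_sq(one, two):
    # Squared Euclidean distance: sufficient for "which point is closer/farther"
    # comparisons, and avoids the sqrt call altogether.
    dx = one.x - two.x
    dy = one.y - two.y
    return dx * dx + dy * dy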

Python - speed up pathfinding

This is my pathfinding function:
def get_distance(x1, y1, x2, y2):
    neighbors = [(-1,0), (1,0), (0,-1), (0,1)]
    old_nodes = [(square_pos[x1,y1], 0)]
    new_nodes = []
    for i in range(50):
        for node in old_nodes:
            if node[0].x == x2 and node[0].y == y2:
                return node[1]
            for neighbor in neighbors:
                try:
                    square = square_pos[node[0].x+neighbor[0], node[0].y+neighbor[1]]
                    if square.lightcycle == None:
                        new_nodes.append((square, node[1]))
                except KeyError:
                    pass
        old_nodes = []
        old_nodes = list(new_nodes)
        new_nodes = []
    nodes = []
    return 50
The problem is that the AI takes too long to respond (the response time needs to be <= 100 ms).
This is just a python way of doing https://en.wikipedia.org/wiki/Pathfinding#Sample_algorithm
You should replace your algorithm with A*-search with the Manhattan distance as a heuristic.
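For reference, here is a minimal, self-contained sketch of grid A* with a Manhattan-distance heuristic. It works on a plain set of walkable (x, y) cells rather than the question's square_pos/lightcycle objects, so treat the names and the example grid as placeholders.

import heapq

def astar(walkable, start, goal):
    # A* over a set of walkable (x, y) cells with 4-way movement.
    # Returns the path length in steps, or None if the goal is unreachable.
    def manhattan(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    open_heap = [(manhattan(start, goal), 0, start)]  # entries are (f, g, cell)
    best_g = {start: 0}
    while open_heap:
        f, g, cell = heapq.heappop(open_heap)
        if cell == goal:
            return g
        if g > best_g.get(cell, float('inf')):
            continue  # stale heap entry
        for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nxt = (cell[0] + dx, cell[1] + dy)
            if nxt not in walkable:
                continue
            ng = g + 1
            if ng < best_g.get(nxt, float('inf')):
                best_g[nxt] = ng
                heapq.heappush(open_heap, (ng + manhattan(nxt, goal), ng, nxt))
    return None

# Usage: a 5x5 grid with a three-cell wall; the shortest detour is 8 steps.
cells = {(x, y) for x in range(5) for y in range(5)} - {(2, 1), (2, 2), (2, 3)}
print(astar(cells, (0, 2), (4, 2)))  # -> 8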
One reasonably fast solution is to implement the Dijkstra algorithm (which I have already implemented for that question):
Build the original map. It's a masked array where the walker cannot walk on masked elements:
%pylab inline
map_size = (20,20)
MAP = np.ma.masked_array(np.zeros(map_size), np.random.choice([0,1], size=map_size))
matshow(MAP)
Below is the Dijkstra algorithm:
def dijkstra(V):
    mask = V.mask
    visit_mask = mask.copy()  # mask visited cells
    m = numpy.ones_like(V) * numpy.inf
    connectivity = [(i,j) for i in [-1, 0, 1] for j in [-1, 0, 1] if (not (i == j == 0))]
    cc = unravel_index(V.argmin(), m.shape)  # current_cell
    m[cc] = 0
    P = {}  # dictionary of predecessors
    #while (~visit_mask).sum() > 0:
    for _ in range(V.size):
        #print cc
        neighbors = [tuple(e) for e in asarray(cc) - connectivity
                     if e[0] > 0 and e[1] > 0 and e[0] < V.shape[0] and e[1] < V.shape[1]]
        neighbors = [e for e in neighbors if not visit_mask[e]]
        tentative_distance = [(V[e]-V[cc])**2 for e in neighbors]
        for i, e in enumerate(neighbors):
            d = tentative_distance[i] + m[cc]
            if d < m[e]:
                m[e] = d
                P[e] = cc
        visit_mask[cc] = True
        m_mask = ma.masked_array(m, visit_mask)
        cc = unravel_index(m_mask.argmin(), m.shape)
    return m, P
def shortestPath(start, end, P):
    Path = []
    step = end
    while 1:
        Path.append(step)
        if step == start: break
        if P.has_key(step):
            step = P[step]
        else:
            break
    Path.reverse()
    return asarray(Path)
And the result:
start = (2,8)
stop = (17,19)
D, P = dijkstra(MAP)
path = shortestPath(start, stop, P)
imshow(MAP, interpolation='nearest')
plot(path[:,1], path[:,0], 'ro-', linewidth=2.5)
Below are some timing statistics:
%timeit dijkstra(MAP)
#10 loops, best of 3: 32.6 ms per loop
The biggest issue with your code is that you don't do anything to avoid the same coordinates being visited multiple times. This means that the number of nodes you visit is guaranteed to grow exponentially, since it can keep going back and forth over the first few nodes many times.
The best way to avoid duplication is to maintain a set of the coordinates we've added to the queue (though if your node values are hashable, you might be able to add them directly to the set instead of coordinate tuples). Since we're doing a breadth-first search, we'll always reach a given coordinate by (one of) the shortest path(s), so we never need to worry about finding a better route later on.
Try something like this:
def get_distance(x1, y1, x2, y2):
    neighbors = [(-1,0), (1,0), (0,-1), (0,1)]
    nodes = [(square_pos[x1,y1], 0)]
    seen = set([(x1, y1)])
    for node, path_length in nodes:
        if path_length == 50:
            break
        if node.x == x2 and node.y == y2:
            return path_length
        for nx, ny in neighbors:
            try:
                square = square_pos[node.x + nx, node.y + ny]
                if square.lightcycle == None and (square.x, square.y) not in seen:
                    nodes.append((square, path_length + 1))
                    seen.add((square.x, square.y))
            except KeyError:
                pass
    return 50
I've also simplified the loop a bit. Rather than switching out the list after each depth, you can just use one loop and add to its end as you're iterating over the earlier values. I still abort if a path hasn't been found within 50 steps (using the distance stored in the 2-tuple rather than the number of passes of the outer loop). A further improvement might be to use a collections.deque for the queue, since you could efficiently pop from one end while appending to the other end. It probably won't make a huge difference, but it might avoid a little bit of memory usage; a sketch of that variant follows.
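A sketch of the deque variant, assuming the same square_pos mapping and node objects as the snippet above:

from collections import deque

def get_distance(x1, y1, x2, y2):
    # Same BFS as above, but the frontier is a deque so we can pop from the
    # left in O(1) while appending newly discovered squares on the right.
    neighbors = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    queue = deque([(square_pos[x1, y1], 0)])
    seen = {(x1, y1)}
    while queue:
        node, path_length = queue.popleft()
        if path_length == 50:
            break
        if node.x == x2 and node.y == y2:
            return path_length
        for nx, ny in neighbors:
            try:
                square = square_pos[node.x + nx, node.y + ny]
                if square.lightcycle == None and (square.x, square.y) not in seen:
                    queue.append((square, path_length + 1))
                    seen.add((square.x, square.y))
            except KeyError:
                pass
    return 50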
I also avoided most of the indexing by one and zero in favor of unpacking into separate variable names in the for loops. I think this is much easier to read, and it avoids confusion since the two different kinds of 2-tuples had different meanings (one is a (node, distance) tuple, the other is (x, y)).

How to find matching vertices in multiple maya meshes

I'm trying to compare the locations of vertices on one mesh to another and generate a list of paired vertices, (the ultimate purpose is to pair up vertices on a neck geo with the top verts of a body geo.)
The way I'm 'pairing' them is to just compare the distances between all vertices in both meshes and then match up the closest ones to each other by ordering them in separate lists (neck_geo_verts[0] is paired with body_geo_verts[0]).
I want to use OpenMaya as I've heard it's considerably faster than cmds.xform.
Here's my code so far getting the verts, although it's using cmds and not the Maya API. I am having a really tough time finding what I need from the Maya documentation.
# The user selects an edge on both the bottom of the neck and the top of the body,
# then this code gets all the vertices in an edge border on both of those geos
# and populates two lists with the vertices
import maya.cmds as mc
import maya.api.OpenMaya as om
import re

mc.unloadPlugin('testingPlugin.py')
mc.loadPlugin('testingPlugin.py')

def main():
    geoOneVerts = []
    geoTwoVerts = []
    edges = cmds.ls(selection=True, sn=True)
    geoOneEdgeNum = re.search(r"\[([0-9_]+)\]", edges[0])
    geoTwoEdgeNum = re.search(r"\[([0-9_]+)\]", edges[1])
    cmds.polySelect(add=True, edgeBorder=int(geoOneEdgeNum.group(1)))
    geoOneEdgeBorder = cmds.ls(selection=True, sn=True)
    geoOneEdgeVerts = cmds.polyInfo(edgeToVertex=True)
    for vertex in geoOneEdgeVerts:
        vertexPairNums = re.search(r":\s*([0-9_]+)\s*([0-9_]+)", vertex)
        geoOneVerts.append(vertexPairNums.group(1))
        geoOneVerts.append(vertexPairNums.group(2))
    cmds.polySelect(replace=True, edgeBorder=int(geoTwoEdgeNum.group(1)))
    geoTwoEdgeBorder = cmds.ls(selection=True, sn=True)
    geoTwoEdgeVerts = cmds.polyInfo(edgeToVertex=True)
    for vertex in geoTwoEdgeVerts:
        vertexPairNums = re.search(r":\s*([0-9_]+)\s*([0-9_]+)", vertex)
        geoTwoVerts.append(vertexPairNums.group(1))
        geoTwoVerts.append(vertexPairNums.group(2))
    geoOneVerts = list(set(geoOneVerts))
    geoTwoVerts = list(set(geoTwoVerts))
    # How do I use OpenMaya to compare the distance from the verts in both lists?

main()
EDIT: This code gives me two lists filled with the DAG names of vertices on two meshes. I'm unsure how to get the positions of those vertices to compare the distance between the vertices in both lists, and I'm also unsure whether I should be using maya.cmds for this as opposed to maya.api.OpenMaya, considering the number of vertices I'm going to be operating on.
EDIT 2: Thanks to Theodox and hundreds of searches for the help. I ended up making a version that works using boundary vertices and one that assumes paired vertices on both meshes share the same global-space position. For both I chose to use the Maya API and forwent Maya commands completely, for performance reasons.
Version 1 (using boundary verts):
import maya.OpenMaya as om

def main():
    geo1Verts = om.MFloatPointArray()
    geo2Verts = om.MFloatPointArray()
    selectionList = om.MSelectionList()
    om.MGlobal.getActiveSelectionList(selectionList)
    geo1SeamVerts = getSeamVertsOn(selectionList, 1)
    geo2SeamVerts = getSeamVertsOn(selectionList, 2)
    pairedVertsDict = pairSeamVerts(geo1SeamVerts, geo2SeamVerts)

def getSeamVertsOn(objectList, objectNumber):
    count = 0
    indexPointDict = {}
    selectedObject = om.MObject()
    iter = om.MItSelectionList(objectList, om.MFn.kGeometric)
    while not iter.isDone():
        count += 1
        connectedVerts = om.MIntArray()
        if (count != objectNumber):
            iter.next()
        else:
            iter.getDependNode(selectedObject)
            vertexIter = om.MItMeshVertex(selectedObject)
            while not vertexIter.isDone():
                if (vertexIter.onBoundary()):
                    vertex = om.MPoint()
                    vertex = vertexIter.position()
                    indexPointDict[int(vertexIter.index())] = vertex
                vertexIter.next()
            return indexPointDict

def pairSeamVerts(dictSeamVerts1, dictSeamVerts2):
    pairedVerts = {}
    if (len(dictSeamVerts1) >= len(dictSeamVerts2)):
        for vert1 in dictSeamVerts1:
            distance = 0
            closestDistance = 1000000
            vertPair = 0
            for vert2 in dictSeamVerts2:
                distance = dictSeamVerts1[vert1].distanceTo(dictSeamVerts2[vert2])
                if (distance < closestDistance):
                    closestDistance = distance
                    vertPair = vert2
            pairedVerts[vert1] = vertPair
        return (pairedVerts)
    else:
        for vert1 in dictSeamVerts2:
            distance = 0
            closestDistance = 1000000
            vertPair = 0
            for vert2 in dictSeamVerts1:
                distance = dictSeamVerts2[vert1].distanceTo(dictSeamVerts1[vert2])
                if (distance < closestDistance):
                    closestDistance = distance
                    vertPair = vert2
            pairedVerts[vert1] = vertPair
        return (pairedVerts)

main()
Version 2 (assuming paired vertices share the same global-space position):
import maya.OpenMaya as om

def main():
    selectionList = om.MSelectionList()
    om.MGlobal.getActiveSelectionList(selectionList)
    meshOneVerts = getVertPositions(selectionList, 1)
    meshTwoVerts = getVertPositions(selectionList, 2)
    meshOneHashedPoints = hashPoints(meshOneVerts)
    meshTwoHashedPoints = hashPoints(meshTwoVerts)
    matchingVertList = set(meshOneHashedPoints).intersection(meshTwoHashedPoints)
    pairedVertList = getPairIndices(meshOneHashedPoints, meshTwoHashedPoints, matchingVertList)

def getVertPositions(objectList, objectNumber):
    count = 0
    pointList = []
    iter = om.MItSelectionList(objectList, om.MFn.kGeometric)
    while not iter.isDone():
        count = count + 1
        if (count != objectNumber):
            iter.next()
        dagPath = om.MDagPath()
        iter.getDagPath(dagPath)
        mesh = om.MFnMesh(dagPath)
        meshPoints = om.MPointArray()
        mesh.getPoints(meshPoints, om.MSpace.kWorld)
        for point in range(meshPoints.length()):
            pointList.append([meshPoints[point][0], meshPoints[point][1], meshPoints[point][2]])
        return pointList

def hashPoints(pointList):
    _clamp = lambda p: hash(int(p * 10000) / 10000.00)
    hashedPointList = []
    for point in pointList:
        hashedPointList.append(hash(tuple(map(_clamp, point))))
    return (hashedPointList)

def getPairIndices(hashListOne, hashListTwo, matchingHashList):
    pairedVertIndices = []
    vertOneIndexList = []
    vertTwoIndexList = []
    for hash in matchingHashList:
        vertListOne = []
        vertListTwo = []
        for hashOne in range(len(hashListOne)):
            if (hashListOne[hashOne] == hash):
                vertListOne.append(hashOne)
        for hashTwo in range(len(hashListTwo)):
            if (hashListTwo[hashTwo] == hash):
                vertListTwo.append(hashTwo)
        pairedVertIndices.append([vertListOne, vertListTwo])
    return pairedVertIndices

main()
The API is significantly faster for the distance-comparison method, but in this case I think the real killer is likely to be the algorithm. Comparing every vert to every other vert is a lot of math.
Probably the easiest thing to do is to come up with a way to hash the vertices instead: turn each xyz point into a single value that can be compared with others without computing distances; two verts with the same hash would necessarily be in the same position. You can tweak the hash algorithm to quantize the vert positions a bit to account for floating-point error at the same time.
Here's a way to hash a point (down to 4 significant digits, which you can tweak by changing the constant in _clamp):
def point_hash(point):
    '''
    hash a tuple, probably a cmds vertex pos
    '''
    _clamp = lambda p: hash(int(p * 10000) / 10000.00)
    return hash(tuple(map(_clamp, point)))
As long as both sets of verts are hashed in the same space (presumably world space), identical hashes will mean matched verts. All you'd have to do is loop through each mesh, creating a dictionary that keys the vertex hash to the vertex index. Here's a way to do it in cmds:
def vert_dict(obj):
    '''
    returns a dictionary of hash: index pairs representing the hashed verts of <obj>
    '''
    results = dict()
    verts = cmds.xform(obj + ".vtx[*]", q=True, t=True, ws=True)
    total = len(verts) / 3
    for v in range(total):
        idx = v * 3
        hsh = point_hash(verts[idx: idx + 3])
        results[hsh] = v
    return results
You can find the intersecting verts - the ones present in both meshes -- by intersecting the keys from both dictionaries. Then convert the matching verts in both meshes back to vertex indices by looking up in the two dictionaries.
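For example (a sketch; the mesh names are placeholders, and it reuses the vert_dict helper above):

# Pair up matching vertex indices between two meshes by intersecting
# their hash -> index dictionaries.
neck_hashes = vert_dict('neck_geo')
body_hashes = vert_dict('body_geo')

shared = set(neck_hashes) & set(body_hashes)   # hashes present in both meshes
pairs = [(neck_hashes[h], body_hashes[h]) for h in shared]
# pairs is a list of (neck_vertex_index, body_vertex_index) tuples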
Unless the meshes are really heavy, this should be doable without the API since all the work is in the hash function which has no API analog.
The only likely issue would be making sure that the verts were in the same space. You would have to fall back on a distance based strategy if you can't get the verts into the same space for some reason.
If you want to get a more usable result from the OP's version 2 script (instead of returning nested and combined lists), you could do something like the following:
indices = lambda itr, val: (i for i, v in enumerate(itr) if v==val) #Get the index of each element of a list matching the given value.
matching = set(hashA).intersection(hashB)
return [i for h in matching
for i in zip(indices(hashA, h), indices(hashB, h))]
which will return a list of two element tuples representing the matched vertex pairs:
[(119, 69), (106, 56), (82, 32), (92, 42), ...
Also, you can use om.MSpace.kObject to compare the two mesh objects in local space depending on your specific needs.
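For instance, the point query in getVertPositions could be switched to local space (a single illustrative line, not a complete snippet):

mesh.getPoints(meshPoints, om.MSpace.kObject)  # object (local) space instead of kWorld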
