Aim: To find out how many cameras can communicate and are therefore in the same mesh. The tuples are camera coordinates, r is the maximum distance they can be apart, and n is the number of cameras.
My code takes a list (here numl) and arranges it into pairs of coordinates based on whether p <= r^2, resulting in a list like lis = [[(3,4),(5,6)], [(2,4),(5,9)], [(1,4),(5,6)]]
I then try to compare the values in the area I highlighted as my problem area and join any lists that share a tuple. In the example above, if any element of lis[0] matches an element of another list, I would merge those (dropping the duplicate) and end up with [[(3,4),(1,4),(5,6)], [(2,4),(5,9)]]
Can anyone help me to figure this out?
import itertools

def smesh():
    fin = []
    numl = [9, 8, 3, 2, 1, 2, 3, 4, 3, 2]
    r = numl.pop(0)
    r = r**2  # compare against the squared distance, p <= r^2
    n = numl.pop(0)
    flis = numl[::2]
    numl.pop(0)
    slis = numl[::2]
    newl = list(zip(flis, slis))
    for x, y in itertools.combinations(newl, 2):
        p = (x[0] - y[0])**2 + (x[1] - y[1])**2
        if p <= r:
            fin.append([x, y])
    # <---below here is my prob area--->
    for x, y in itertools.combinations(fin, 2):
        if x[0] in y:
            fin.append(x + y)
        elif x[1] in y:
            fin.append(x + y)
    print(fin)
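For the merging step itself, one option (not in the original code, just a sketch) is to treat every pair in fin as an edge and collect connected components, for example with a small union-find:

```python
def merge_groups(pairs):
    # Union-find over the tuples: any two tuples that appear
    # in the same pair end up in the same component.
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b in pairs:
        union(a, b)

    # group every tuple under its component's root
    groups = {}
    for node in parent:
        groups.setdefault(find(node), []).append(node)
    return list(groups.values())

lis = [[(3, 4), (5, 6)], [(2, 4), (5, 9)], [(1, 4), (5, 6)]]
print(merge_groups(lis))
```

This also handles chains ((a,b) and (b,c) and (c,d) all end up in one group), which the pairwise-merge loop above cannot do in a single pass.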
I have a list of strings. The values in each string are the navigation steps/actions of the same procedure performed by different users. I want to create coordinates for these steps/actions and store them for building a graph. Each unique step/action
will have one coordinate. My idea: I consider the string with the most steps first and assign its actions coordinates ranging from (1,0) to (n,0); this first string has y = 0, meaning all its actions sit in one layer. When I check the steps/actions of the second string, any missing ones get assigned (1,1) to (n,1), and so on. Care has to be taken that if the first steps/actions of one string fall in the middle of a longer string, the coordinates should come after that point.
This sounds confusing, but in simple terms, I want to create coordinates for the user flow of a website.
Assume list,
A = ['A___O___B___C___D___E___F___G___H___I___J___K___L___M___N',
'A___O___B___C___E___D___F___G___H___I___J___K___L___M___N',
'A___B___C___D___E___F___G___H___I___J___K___L___M___N',
'A___B___C___E___D___F___G___H___I___J___K___L___M___N',
'A___Q___C___D___E___F___G___H___I___J___K___L___M___N',
'E___P___F___G___H___I___J___K___L___M___N']
I started below code, but it is getting complicated. Any help is appreciated.
A1 = [i.split('___') for i in A]
# A1.sort(key=len, reverse=True)
A1 = sorted(A1, reverse=True)
if len(A1) > 1:
    Actions = {}
    horizontalVal = {}
    verticalVal = {}
    restActions = []
    for i in A1:
        for j in i[1:]:
            restActions.append(j)
    for i in range(len(A1)):
        if A1[i][0] not in restActions and A1[i][0] not in Actions.keys():
            Actions[A1[i][0]] = [i, 0]
            horizontalVal[A1[i][0]] = i
            verticalVal[A1[i][0]] = 0
    unmarkedActions = []
    for i in range(len(sortedLen)):  # sortedLen: string lengths, longest first (defined elsewhere, not shown)
        currLen = sortedLen[i]
        for j in range(len(A1)):
            if len(A1[j]) == currLen:
                if j == 0:
                    for k in range(len(A1[j])):
                        currK = A1[j][k]
                        if currK not in Actions.keys():
                            Actions[currK] = [k, 0]
                            horizontalVal[currK] = k
                            verticalVal[currK] = 0
                else:
                    currHori = []
                    print(A1[j])
                    for k in range(len(A1[j])):
                        currK = A1[j][k]
                        # ... to be continued
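For what it's worth, here is a much-simplified sketch of the layering idea described above. It ignores the "coordinates should come after" constraint and uses a shortened sample list, so it is only a starting point, not a full solution:

```python
# Shortened sample; the real input is the list A from the question.
A = ['A___O___B___C___D', 'A___B___C___E___D']

# Longest string first, so the baseline row comes from the richest flow.
A1 = sorted((s.split('___') for s in A), key=len, reverse=True)

coords = {}
for layer, actions in enumerate(A1):
    for pos, action in enumerate(actions):
        # The first string defines row y=0; later strings only place
        # actions not seen before, one row lower per string.
        if action not in coords:
            coords[action] = (pos, layer)
print(coords)
```

Here each unique action gets the x position it first appeared at and a y equal to the index of the string that introduced it; satisfying the "insert after" ordering would require shifting x values of later actions, which this sketch does not attempt.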
I have a list of dictionaries. The key is item_cd and the value is location_coordinates.
I would like to find the Euclidean distance of these locations and create a separate, similar list based on the proximity of their locations.
Steps
The initial distance will be calculated from the origin i.e. (0,0).
The closest item will be added to a separate list suppose it is {5036885955: [90.0, 61.73]} in this case.
Now I would like to find the closest order from the centroid of the previous orders. In this case, the centroid will be [90.0, 61.73] of order {5036885955: [90.0, 61.73]} as there is only one coordinate at the moment. The next closest order from the centroid of the previous order will be {5036885984: [86.0, 73.03]}
I would like to repeat the above steps until the list is empty
Input :
[{5036885850: [92.0, 88.73]}, {5036885955: [90.0, 61.73]}, {5036885984: [86.0, 73.03]}, {5036885998: [102.0, 77.54]}]
What I have tried :
import math

def calculate_centroid(self, locations: list) -> list:
    """
    get the centroid of the order
    param: Location object
    return: centroid of an order
    """
    x, y = [p[0] for p in locations], [p[1] for p in locations]
    centroid = [round(sum(x) / len(locations), 2), round(sum(y) / len(locations), 2)]
    return centroid

def get_closest_order_to_origin(self, order_centroid_list: list) -> list:
    """
    get the next closest order
    """
    initial_order = [[0, 0]]
    next_order = []
    centroids = []
    for order in order_centroid_list:
        for shipping_req_key, centroid in order.items():
            distance = math.sqrt(((initial_order[0][0] - centroid[0]) ** 2) + ((initial_order[0][1] - centroid[1]) ** 2))
I am not sure how I should proceed further. I would appreciate your feedback.
You can use numpy.linalg.norm() to calculate the Euclidean distance between two numpy arrays. So you can declare those position arrays as numpy arrays and apply this function to calculate distances.
Let's see this step by step.
Declare the initial current_pos as the origin (0, 0). Extract the points sequentially into a list named pos_list. Declare result_list as an empty list to collect results.
From pos_list, each position is taken in turn; its Euclidean distance to current_pos is calculated and stored in dist_list.
min(dist_list) gives the minimum distance min_dist. The corresponding index of min_dist is fetched from dist_list, so the relevant position and entry of the input data can be identified and processed (i.e. removed from data and appended to result_list).
The whole process continues until the data becomes empty.
The whole implementation:
import numpy as np

data = [{5036885850: [92.0, 88.73]}, {5036885955: [90.0, 61.73]}, {5036885984: [86.0, 73.03]}, {5036885998: [102.0, 77.54]}]
pos_list = [list(item.values())[0] for item in data]
current_pos = np.array([0, 0])
result_list = []
while len(data) != 0:
    dist_list = [np.linalg.norm(current_pos - np.array(pos)) for pos in pos_list]
    min_dist = min(dist_list)
    item_index = dist_list.index(min_dist)
    to_be_moved = data[item_index]
    result_list.append(to_be_moved)
    # current_pos = pos_list[item_index]
    pos_list.remove(pos_list[item_index])
    data.remove(to_be_moved)
print(result_list)
Your requirement was not clear in the question, so I'm handling two cases.
First case: arrange the points in such an order that the distance of each point from the origin is increasing. The above code works for this scenario, and this matches your expected output.
Second case: arrange the points a, b, c, d, ... in such a way that a is nearest to the origin, b is nearest to a, c is nearest to b, and so on. To see this case in action, just uncomment the commented line.
NOTE: If any type of sort or positional modification is done, this solution will break.
UPDATE
According to your last added condition, you need to calculate the centroid of all the points in the resulting order, then find the order that is closest to the centroid.
You can pass the list of points that has been already added to the result find the centroid with the following function:
def get_centroid(lst):
    arr = np.array(lst)
    length = arr.shape[0]
    if length == 0:
        return 0, 0
    else:
        sum_x = np.sum(arr[:, 0])
        sum_y = np.sum(arr[:, 1])
        return sum_x / float(length), sum_y / float(length)
to do so, keep a track of the point that is added to the final result and removed from the pos_list in another list called pos_removed_list:
pos_removed_list.append(pos_list[item_index])
and replace the commented line:
# current_pos = pos_list[item_index]
with
current_pos = np.array(get_centroid(pos_removed_list))
and you're ready to go!
**Full Code:**
import numpy as np

def get_centroid(lst):
    arr = np.array(lst)
    length = arr.shape[0]
    if length == 0:
        return 0, 0
    else:
        sum_x = np.sum(arr[:, 0])
        sum_y = np.sum(arr[:, 1])
        return sum_x / float(length), sum_y / float(length)

data = [{5036885850: [92.0, 88.73]}, {5036885955: [90.0, 61.73]}, {5036885984: [86.0, 73.03]}, {5036885998: [102.0, 77.54]}]
pos_list = [list(item.values())[0] for item in data]
pos_removed_list = []
current_pos = np.array((0, 0))
result_list = []
print('initial centroid: ' + str(current_pos))
while len(data) != 0:
    dist_list = [np.linalg.norm(current_pos - np.array(pos)) for pos in pos_list]
    min_dist = min(dist_list)
    item_index = dist_list.index(min_dist)
    to_be_moved = data[item_index]
    print('moved: ' + str(to_be_moved))
    result_list.append(to_be_moved)
    pos_removed_list.append(pos_list[item_index])
    pos_list.remove(pos_list[item_index])
    current_pos = np.array(get_centroid(pos_removed_list))
    print('current centroid: ' + str(current_pos))
    data.remove(to_be_moved)
print('\nfinal result: ' + str(result_list))
Your step by step output will be:
initial centroid: [0 0]
moved: {5036885955: [90.0, 61.73]}
current centroid: [90. 61.73]
moved: {5036885984: [86.0, 73.03]}
current centroid: [88. 67.38]
moved: {5036885998: [102.0, 77.54]}
current centroid: [92.66666667 70.76666667]
moved: {5036885850: [92.0, 88.73]}
current centroid: [92.5 75.2575]
final result: [{5036885955: [90.0, 61.73]}, {5036885984: [86.0, 73.03]}, {5036885998: [102.0, 77.54]}, {5036885850: [92.0, 88.73]}]
Let me know if this is what you were looking for
Using the key argument to the min function helps keep our code efficient and clean. In addition, we can keep track of everything with a few functions that use type aliases to annotate their inputs and outputs. We can calculate the new centroid in constant time from the old centroid, as long as we keep track of how many points were involved in calculating it. Finally, we can minimize the squared distance to the centroid instead of the distance, and save ourselves calculating square roots.
from typing import Callable, cast, Dict, List, Tuple

Point = Tuple[float, float]
Order = Dict[int, List[float]]

orders: List[Order] = [
    {5036885850: [92.0, 88.73]},
    {5036885955: [90.0, 61.73]},
    {5036885984: [86.0, 73.03]},
    {5036885998: [102.0, 77.54]}
]

def new_centroid(new_point: Point, old_centroid: Point, old_centroid_weight: float) -> Point:
    new_x, new_y = new_point
    old_x, old_y = old_centroid
    new_point_weight = 1 / (old_centroid_weight + 1)
    return (
        new_point_weight * new_x + (1 - new_point_weight) * old_x,
        new_point_weight * new_y + (1 - new_point_weight) * old_y,
    )

def distance_squared_to(point: Point) -> Callable[[Order], float]:
    def distance_squared_to_point(order: Order) -> float:
        order_point = get_point(order)
        from_x, from_y = point
        order_x, order_y = order_point
        return (from_x - order_x) ** 2 + (from_y - order_y) ** 2
    return distance_squared_to_point

def get_point(order: Order) -> Point:
    return cast(Point, list(order.values()).pop())

sorted_orders = []
centroid = (0.0, 0.0)
centroid_weight = 0
while orders:
    next_order = min(orders, key=distance_squared_to(centroid))
    orders.remove(next_order)
    sorted_orders.append(next_order)
    next_point = get_point(next_order)
    centroid = new_centroid(next_point, centroid, centroid_weight)
    centroid_weight += 1
print(sorted_orders)
A couple of notes here. Unfortunately, Python doesn't have a really nice idiom for removing the minimum element from a collection. We have to find the min (O(n)) and then call remove on that element (O(n) again). We could use something like numpy.argmin, but that's probably overkill and a pretty heavy dependency for this.
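One single-pass workaround (just a sketch, with made-up sample data) is to run min over enumerate, so the index comes back together with the element and no second search is needed:

```python
orders = [{'a': [3.0, 4.0]}, {'b': [1.0, 1.0]}, {'c': [6.0, 8.0]}]

def dist_sq_from_origin(order):
    # unpack the single (x, y) value of a one-entry dict
    (x, y), = order.values()
    return x * x + y * y

# min over (index, order) pairs: one scan yields both the element and its index
idx, nearest = min(enumerate(orders), key=lambda t: dist_sq_from_origin(t[1]))
orders.pop(idx)  # removal by index is still O(n), but there is no second scan
print(idx, nearest)
```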
The second note is that our Orders are somewhat hard to work with. To get the location, we have to cast the values to a list and pop an element (or just take the first). Given an order, we can't index into it without introspecting to find the key. It might be easier if the orders were just tuples where the first element was the order number and the second was the location. Any number of other forms could also be helpful (named tuple, data class, dictionary with keys item_cd and location_coordinates, ...). Location coordinates could also be a tuple instead of a list, but that's more of a type-hinting issue than anything else.
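As an illustration of that last suggestion, the single-entry dicts could be flattened into (id, (x, y)) tuples up front (the names below are illustrative, not from the original code):

```python
orders = [
    {5036885850: [92.0, 88.73]},
    {5036885955: [90.0, 61.73]},
]

# flatten each single-entry dict into an (order_id, point) tuple
flat = [(oid, tuple(pt)) for order in orders for oid, pt in order.items()]
print(flat)
```

After this, `oid, (x, y) = flat[0]` works directly, with no introspection of dict keys.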
I have the following situation:
(1) I have a large grid. Based on some conditions I want to further observe specific points/cells in this grid. Each cell has an ID and coordinates X and Y, stored separately. So in this case let's observe one cell only, marked C on the image, which is located on the edge of the grid. With a formula I can get all the neighbouring cells of the first order (marked 1 on the image) and the second order (marked 2 on the image).
(2) With a further condition I identify some cells among the neighbouring cells; they are marked in orange on the second image. What I want to do is connect all orange cells with each other by optimizing the distances, taking into account only the min() distances. My first attempt was to handle cells only by calculating the distances to cells of the lower order: when looking at cells among the second-order neighbours, I look at the cells in 1 only. This solution of connections is presented on image 2, but it's not optimal, since the ideal solution would compare the distances of all cells and not only the cells of the lower neighbour order. By doing that, I get the situation presented on image 3. The problem is that the cells are then, of course, not connected to the centre. What to do?
The current code is:
CO - list of centre points.
data - DataFrame of all IDs with X, Y values
CO_list = CO['ID'].tolist()
neighbor100 = []
for p in CO_list:
    d = get_neighbors100k2(p, len(data))  # function that finds the IDs of first-order neighbours
    neighbor100.append(d)
neighbor200 = []
for p in CO_list:
    d = get_neighbors200k2(p, len(data))  # function that finds the IDs of second-order neighbours
    neighbor200.append(d)
flat100 = []
for i in neighbor100:
    for j in i:
        flat100.append(j)
flat200 = []
for i in neighbor200:
    for j in i:
        flat200.append(j)
neighbors100 = flat100
neighbors200 = flat200
data_sosedi100 = data.iloc[flat100, ].reset_index(drop=True)
data_sosedi200 = data.iloc[flat200, ].reset_index(drop=True)
dist200 = []
for b in flat200:
    d = ((pd.DataFrame((data_sosedi100['X'] - data.iloc[b, ]['X'])**2
                       + (data_sosedi100['Y'] - data.iloc[b, ]['Y'])**2)**0.5)).sum(1)
    dist200.append(d.min())
data_sosedi200['dist'] = dist200
data_sosedi200['id'] = None
for e in CO_list:
    data_sosedi200.loc[data_sosedi200['FID_2'].isin(get_neighbors200k2(e, len(data))), 'id'] = e
Do you have any suggestion how to optimize this a bit further? I hope I presented the whole picture; if needed, I'll clarify further. If you see a part of the code where I'd be able to further optimize this loop, I'd be very grateful!
I defined the points manually to work with:
import numpy as np

nodes = [[-2,1], [-2,0], [-1,0], [0,0], [1,1], [2,1], [2,0], [1,2], [2,2]]
center = [0, 0]

def find_neighbor(node):
    n = []
    for i in range(-1, 2):
        for j in range(-1, 2):
            if not (i == 0 and j == 0):
                n.append([node[0] + i, node[1] + j])
    return [N for N in n if N in nodes]

def distance_to_center(node):
    return np.sqrt(node[0]**2 + node[1]**2)

def distance_between_two_nodes(node1, node2):
    return np.sqrt((node1[0] - node2[0])**2 + (node1[1] - node2[1])**2)

def next_node_closest_to_center(node):
    min_dist = distance_to_center(node)
    next_node = node
    for n in find_neighbor(node):
        if distance_to_center(n) < min_dist:
            min_dist = distance_to_center(n)
            next_node = n
    return next_node, min_dist

def get_path_to_center(node):
    node_path = [node]
    distance = 0.
    while node != center:
        new_node = next_node_closest_to_center(node)[0]
        distance += distance_between_two_nodes(node, new_node)
        node_path.append(new_node)
        node = new_node
    return node_path, distance

def furthest_nodes_from_center(nodes):
    max_dist = 0.
    furthest_nodes_pathwise = []
    for n in nodes:
        if get_path_to_center(n)[1] > max_dist:
            furthest_nodes_pathwise = []
            max_dist = get_path_to_center(n)[1]
            furthest_nodes_pathwise.append(n)
        elif get_path_to_center(n)[1] == max_dist:
            furthest_nodes_pathwise.append(n)
    return furthest_nodes_pathwise

def farthest_node_from_center(nodes):
    max_dist = 0.
    farthest_node = center
    for n in nodes:
        if distance_to_center(n) > max_dist:
            max_dist = distance_to_center(n)
            farthest_node = n
    return farthest_node

def closest_node_to_center(nodes):
    min_dist = distance_to_center(farthest_node_from_center(nodes))
    closest_node = center
    for n in nodes:
        if distance_to_center(n) < min_dist:
            min_dist = distance_to_center(n)
            closest_node = n
    return closest_node

def closest_node_center_with_furthest_distance(node_selection):
    if len(node_selection) == 1:
        return node_selection[0]
    else:
        return closest_node_to_center(node_selection)

print(closest_node_center_with_furthest_distance(furthest_nodes_from_center(nodes)))
Output:
[2, 0]
[Finished in 0.266s]
By running this on all nodes I can now determine that the furthest node away path-wise, but still closest to the center distance-wise, is [2,0] and not [2,2]. So we start from there. To find the one on the other side, just split the data as I said into negative and positive x values. If you run it over a list of only the negative-x cells, you will get [-2,1].
Now that you have your two starting cells [2,0] and [-2,1], I will leave you to figure out the algorithm to navigate to the center passing through all cells, using the steps in my comments (you can now skip step 1, because this is the answer posted).
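The negative/positive x split mentioned above can be sketched like this (illustration only, using the same nodes list):

```python
nodes = [[-2, 1], [-2, 0], [-1, 0], [0, 0], [1, 1], [2, 1], [2, 0], [1, 2], [2, 2]]

# split cells by the sign of x; the center column (x == 0) belongs to neither side
negative_x = [n for n in nodes if n[0] < 0]
positive_x = [n for n in nodes if n[0] > 0]
print(negative_x)
print(positive_x)
```

Each half can then be fed to furthest_nodes_from_center separately to get the two starting cells.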
I have two 2D lists with x and y coordinates, and I want to go through list1 and, for each point, find the closest (x, y) coordinates in list2. The lists are of different lengths, and it's okay if I don't use all the points of list2, or even if I reuse points, as long as I go through all the points in list1 exactly once. I need the shift itself as well as the locations of both points in the lists. Here's what I have done to find the shift:
import math

s_x = [1.0, 2.0, 3.0]
s_y = [1.5, 2.5, 3.5]
SDSS_x = [3.0, 4.0, 5.0]
SDSS_y = [3.5, 4.5, 5.5]
list1 = list(zip(s_x, s_y))
list2 = list(zip(SDSS_x, SDSS_y))  # list() so list2 can be scanned repeatedly in Python 3
shift = []
place_in_array = []
for num, val in enumerate(list1):
    guess = 9999999999999.0
    place_guess = 0
    for index, line in enumerate(list2):
        new_guess = math.hypot(line[0] - val[0], line[1] - val[1])
        if new_guess < guess:
            guess = new_guess
            place_guess = index
    shift.append(guess)
    place_in_array.append(place_guess)
print(shift)
print(place_in_array)
but the output is this:
[2.8284271247461903, 1.4142135623730951, 0.0]
[0, 0, 0]
These are wrong, and I can't figure out what the problem is.
There are specialized data structures for these geometric queries, with implementations that are debugged and efficient. Why not use them? sklearn.neighbors.BallTree can do these types of queries, as well as sklearn.neighbors.KDTree.
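For example, scipy's cKDTree (a close analogue of the sklearn classes above, assumed to be installed) answers all the nearest-neighbour queries for the question's data in one vectorised call:

```python
import numpy as np
from scipy.spatial import cKDTree

list1 = np.array([[1.0, 1.5], [2.0, 2.5], [3.0, 3.5]])
list2 = np.array([[3.0, 3.5], [4.0, 4.5], [5.0, 5.5]])

tree = cKDTree(list2)               # build the tree once over the target points
dists, indices = tree.query(list1)  # nearest neighbour in list2 for every point of list1
print(dists)    # the shifts
print(indices)  # the positions in list2
```

For this particular data every point of list1 is closest to list2[0], so indices comes out as [0, 0, 0], which suggests the question's own loop was actually producing the right answer.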
def lazy_dist(p0):
    return lambda p1: (p0[0] - p1[0])**2 + (p0[1] - p1[1])**2

closest_matches = {p0: min(list2, key=lazy_dist(p0)) for p0 in list1}
I think this will do what you want (not very fast, but it should work).
I'm trying to compare the locations of vertices on one mesh to another and generate a list of paired vertices, (the ultimate purpose is to pair up vertices on a neck geo with the top verts of a body geo.)
The way I'm 'pairing' them is to compare the distances between all vertices in both meshes and then match the closest ones to each other by ordering them in separate lists (neck_geo_verts[0] is paired with body_geo_verts[0]).
I want to use OpenMaya, as I've heard it's considerably faster than cmds.xform.
Here's my code so far getting the verts, although it's using cmds and not the Maya API. I am having a really tough time finding what I need from the Maya documentation.
# The user selects an edge on both the bottom of the neck and top of the body, then this code gets all the vertices in an edge border on both of those geos and populates two lists with the vertices
import maya.cmds as cmds
import maya.api.OpenMaya as om
import re

cmds.unloadPlugin('testingPlugin.py')
cmds.loadPlugin('testingPlugin.py')

def main():
    geoOneVerts = []
    geoTwoVerts = []
    edges = cmds.ls(selection=True, sn=True)
    geoOneEdgeNum = re.search(r"\[([0-9_]+)\]", edges[0])
    geoTwoEdgeNum = re.search(r"\[([0-9_]+)\]", edges[1])
    cmds.polySelect(add=True, edgeBorder=int(geoOneEdgeNum.group(1)))
    geoOneEdgeBorder = cmds.ls(selection=True, sn=True)
    geoOneEdgeVerts = cmds.polyInfo(edgeToVertex=True)
    for vertex in geoOneEdgeVerts:
        vertexPairNums = re.search(r":\s*([0-9_]+)\s*([0-9_]+)", vertex)
        geoOneVerts.append(vertexPairNums.group(1))
        geoOneVerts.append(vertexPairNums.group(2))
    cmds.polySelect(replace=True, edgeBorder=int(geoTwoEdgeNum.group(1)))
    geoTwoEdgeBorder = cmds.ls(selection=True, sn=True)
    geoTwoEdgeVerts = cmds.polyInfo(edgeToVertex=True)
    for vertex in geoTwoEdgeVerts:
        vertexPairNums = re.search(r":\s*([0-9_]+)\s*([0-9_]+)", vertex)
        geoTwoVerts.append(vertexPairNums.group(1))
        geoTwoVerts.append(vertexPairNums.group(2))
    geoOneVerts = list(set(geoOneVerts))
    geoTwoVerts = list(set(geoTwoVerts))
    # How do I use OpenMaya to compare the distance from the verts in both lists?

main()
EDIT: This code gives me two lists filled with the DAG names of vertices on two meshes. I'm unsure how to get the positions of those vertices to compare the distance between the vertices in both lists and I'm also unsure if I should be using maya.cmds for this as opposed to maya.api.OpenMaya considering the amount of vertices I'm going to be operating on.
EDIT2: Thanks to Theodox and hundreds of searches for the help. I ended up making a version that worked using boundary vertices and one that assumed paired vertices on both meshes would be in identical global space. Both of which I chose to use the Maya API and forewent Maya Commands completely for performance reasons.
Version 1 (Using Boundary Verts):
import maya.OpenMaya as om

def main():
    geo1Verts = om.MFloatPointArray()
    geo2Verts = om.MFloatPointArray()
    selectionList = om.MSelectionList()
    om.MGlobal.getActiveSelectionList(selectionList)
    geo1SeamVerts = getSeamVertsOn(selectionList, 1)
    geo2SeamVerts = getSeamVertsOn(selectionList, 2)
    pairedVertsDict = pairSeamVerts(geo1SeamVerts, geo2SeamVerts)

def getSeamVertsOn(objectList, objectNumber):
    count = 0
    indexPointDict = {}
    selectedObject = om.MObject()
    iter = om.MItSelectionList(objectList, om.MFn.kGeometric)
    while not iter.isDone():
        count += 1
        connectedVerts = om.MIntArray()
        if count != objectNumber:
            iter.next()
        else:
            iter.getDependNode(selectedObject)
            vertexIter = om.MItMeshVertex(selectedObject)
            while not vertexIter.isDone():
                if vertexIter.onBoundary():
                    vertex = om.MPoint()
                    vertex = vertexIter.position()
                    indexPointDict[int(vertexIter.index())] = vertex
                vertexIter.next()
            return indexPointDict

def pairSeamVerts(dictSeamVerts1, dictSeamVerts2):
    pairedVerts = {}
    if len(dictSeamVerts1) >= len(dictSeamVerts2):
        for vert1 in dictSeamVerts1:
            distance = 0
            closestDistance = 1000000
            vertPair = 0
            for vert2 in dictSeamVerts2:
                distance = dictSeamVerts1[vert1].distanceTo(dictSeamVerts2[vert2])
                if distance < closestDistance:
                    closestDistance = distance
                    vertPair = vert2
            pairedVerts[vert1] = vertPair
        return pairedVerts
    else:
        for vert1 in dictSeamVerts2:
            distance = 0
            closestDistance = 1000000
            vertPair = 0
            for vert2 in dictSeamVerts1:
                distance = dictSeamVerts2[vert1].distanceTo(dictSeamVerts1[vert2])
                if distance < closestDistance:
                    closestDistance = distance
                    vertPair = vert2
            pairedVerts[vert1] = vertPair
        return pairedVerts

main()
Version 2 (Assuming Paired Vertices Would Share a Global Space):
import maya.OpenMaya as om

def main():
    selectionList = om.MSelectionList()
    om.MGlobal.getActiveSelectionList(selectionList)
    meshOneVerts = getVertPositions(selectionList, 1)
    meshTwoVerts = getVertPositions(selectionList, 2)
    meshOneHashedPoints = hashPoints(meshOneVerts)
    meshTwoHashedPoints = hashPoints(meshTwoVerts)
    matchingVertList = set(meshOneHashedPoints).intersection(meshTwoHashedPoints)
    pairedVertList = getPairIndices(meshOneHashedPoints, meshTwoHashedPoints, matchingVertList)

def getVertPositions(objectList, objectNumber):
    count = 0
    pointList = []
    iter = om.MItSelectionList(objectList, om.MFn.kGeometric)
    while not iter.isDone():
        count = count + 1
        if count != objectNumber:
            iter.next()
        else:
            dagPath = om.MDagPath()
            iter.getDagPath(dagPath)
            mesh = om.MFnMesh(dagPath)
            meshPoints = om.MPointArray()
            mesh.getPoints(meshPoints, om.MSpace.kWorld)
            for point in range(meshPoints.length()):
                pointList.append([meshPoints[point][0], meshPoints[point][1], meshPoints[point][2]])
            return pointList

def hashPoints(pointList):
    _clamp = lambda p: hash(int(p * 10000) / 10000.00)
    hashedPointList = []
    for point in pointList:
        hashedPointList.append(hash(tuple(map(_clamp, point))))
    return hashedPointList

def getPairIndices(hashListOne, hashListTwo, matchingHashList):
    pairedVertIndices = []
    vertOneIndexList = []
    vertTwoIndexList = []
    for hash in matchingHashList:
        vertListOne = []
        vertListTwo = []
        for hashOne in range(len(hashListOne)):
            if hashListOne[hashOne] == hash:
                vertListOne.append(hashOne)
        for hashTwo in range(len(hashListTwo)):
            if hashListTwo[hashTwo] == hash:
                vertListTwo.append(hashTwo)
        pairedVertIndices.append([vertListOne, vertListTwo])
    return pairedVertIndices

main()
The API is significantly faster for the distance-comparison method, but in this case I think the real killer is likely to be the algorithm. Comparing every vert to every other is a lot of math.
Probably the easiest thing to do is to come up with a way to hash the vertices instead: turn each xyz point into a single value that can be compared with others without computing distances; two verts with the same hash would necessarily be in the same position. You can tweak the hash algorithm to quantize the vert positions a bit to account for floating-point error at the same time.
Here's a way to hash a point (down to 4 significant digits, which you can tweak by changing the constant in _clamp):
def point_hash(point):
    '''
    hash a tuple, probably a cmds vertex pos
    '''
    _clamp = lambda p: hash(int(p * 10000) / 10000.00)
    return hash(tuple(map(_clamp, point)))
As long as both sets of verts are hashed in the same space (presumably world space) identical hashes will mean matched verts. All you'd have to do is to loop through each mesh, creating a dictionary which keyed the vertex hash to the vertex index. Here's a way to do it in cmds:
def vert_dict(obj):
    '''
    returns a dictionary of hash: index pairs representing the hashed verts of <obj>
    '''
    results = dict()
    verts = cmds.xform(obj + ".vtx[*]", q=True, t=True, ws=True)
    total = len(verts) // 3  # integer division: three floats per vertex
    for v in range(total):
        idx = v * 3
        hsh = point_hash(verts[idx: idx + 3])
        results[hsh] = v
    return results
You can find the intersecting verts - the ones present in both meshes -- by intersecting the keys from both dictionaries. Then convert the matching verts in both meshes back to vertex indices by looking up in the two dictionaries.
Unless the meshes are really heavy, this should be doable without the API since all the work is in the hash function which has no API analog.
The only likely issue would be making sure that the verts were in the same space. You would have to fall back on a distance based strategy if you can't get the verts into the same space for some reason.
If you want to get a more useable result from the op's version 2 script (instead of returning nested and combined lists), you could do something like the following:
indices = lambda itr, val: (i for i, v in enumerate(itr) if v==val) #Get the index of each element of a list matching the given value.
matching = set(hashA).intersection(hashB)
return [i for h in matching
for i in zip(indices(hashA, h), indices(hashB, h))]
which will return a list of two element tuples representing the matched vertex pairs:
[(119, 69), (106, 56), (82, 32), (92, 42), ...
Also, you can use om.MSpace.kObject to compare the two mesh objects in local space depending on your specific needs.