Can't implement GJK distance algorithm - python

I'm trying to design my own physics engine from scratch, as well as the vector/matrix libraries.
Everything has worked beautifully so far, until I tried to implement collision detection in my library. I started with SAT, which worked great for detecting collisions, but I also wanted the distance between the objects. So I tried to implement the GJK distance algorithm, just to see if I could find the distance between the origin and a polygon. But it just doesn't work: the smallest distance the algorithm reports is always the distance to one of the vertices of the polygon.
I know I made the other libraries from scratch, but I'm positive that they are working. Anyway, here's the code where I've implemented GJK:
# objectL[0] is a hexagon
v = objectL[0].nodes[0]
W = []
u = 0
close_enough = False
while not close_enough and v != Vector(0, 0):
    w = objectL[0].support(-v)
    d = v * w / abs(v)  # *: dot product, abs: magnitude
    u = max(u, d)
    close_enough = abs(v) - u <= 0.0001
    if not close_enough:
        W.append(w)
        while len(W) > 2:
            del W[0]
        v = Vector(0, 0).vectorToLine(*W)  # distance from the origin to the simplex formed by W
And now the support method:
def support(self, axis):
    maxP = self.nodes[0] * axis  # dot product of first vertex with the axis
    n = self.nodes[0]
    for node in self.nodes[1:]:
        p = node * axis
        if p > maxP:
            maxP = p
            n = node
    return node
Those are the code snippets where I think the error is, but I can't find it. The GJK algorithm I've copied from here. Thanks!
Edit:
Here is my project (implemented in pygame).

OK, found the error. It wasn't in the implementation, but rather in the functions I had previously made: the support method, which returned node instead of n, and the vectorToLine function, which returned an incorrect vector (negative value).
Also, for those reading this post years from now and trying to implement this algorithm, please note that I only changed the while len(W) > 2 part, to:
while len(W) > 2:
    # find and drop the point of the simplex farthest from the origin
    farthest = W[0]
    maxD = abs(farthest)
    for w in W:
        if abs(w) > maxD:
            maxD = abs(w)
            farthest = w
    W.remove(farthest)
This removes the farthest point of the simplex/triangle, so the algorithm keeps the two points closest to the origin and continues from there.
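For reference, a minimal sketch of the corrected support method (same Vector conventions as above: * is the dot product, abs is the magnitude):

def support(self, axis):
    # Return the vertex of the polygon farthest along `axis`;
    # the fix is to return the tracked vertex n, not the loop variable node.
    maxP = self.nodes[0] * axis
    n = self.nodes[0]
    for node in self.nodes[1:]:
        p = node * axis
        if p > maxP:
            maxP = p
            n = node
    return n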

Related

Improving BFS performance with some kind of memoization

I have this issue: I'm trying to build an algorithm which will find distances from one vertex to the others in a graph.
Let's take a really simple example where my network looks like this:
network = [[0,1,2],[2,3,4],[4,5,6],[6,7]]
I created a BFS function which is supposed to find the lengths of the paths from a specified source to the other vertices of the graph:
from itertools import chain
import numpy as np

n = 8
graph = {}
for i in range(0, n):
    graph[i] = []

for communes in network:
    for vertice in communes:
        work = communes.copy()
        work.remove(vertice)
        graph[vertice].append(work)

for k, v in graph.items():
    graph[k] = list(chain(*v))

def bsf3(graph, s):
    matrix = np.zeros([n, n])
    dist = {}
    visited = []
    queue = [s]
    dist[s] = 0
    visited.append(s)
    matrix[s][s] = 0
    while queue:
        v = queue.pop(0)
        for neighbour in graph[v]:
            if neighbour in visited:
                pass
            else:
                matrix[s][neighbour] = matrix[s][v] + 1
                queue.append(neighbour)
                visited.append(neighbour)
    return matrix

bsf3(graph, 2)
First I create the graph (a dictionary) and then use the function to find the distances.
What I'm concerned about is that this approach doesn't scale to larger networks (say, with 1000 people in them). What I'm thinking about is using some kind of memoization (that's actually why I made a matrix instead of a list). The idea is that when the algorithm calculates the path from, say, 0 to 3 (which it already does), it should also keep track of the other routes along the way, so that matrix[1][3] = 1, etc.
Then if I called the function as bsf3(graph, 1), it would not calculate everything from scratch, but would be able to reuse some values already stored in the matrix.
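A rough sketch of the simplest form of the caching I mean, assuming the graph dictionary built above (the names here are mine): each computed row of the matrix is stored, so repeated calls with the same source cost nothing.

computed_sources = set()
distances = np.full([n, n], -1)

def bfs_cached(graph, s):
    # Reuse the row if this source was already expanded
    if s in computed_sources:
        return distances[s]
    distances[s][s] = 0
    queue = [s]
    while queue:
        v = queue.pop(0)
        for neighbour in graph[v]:
            if distances[s][neighbour] == -1:   # not visited yet
                distances[s][neighbour] = distances[s][v] + 1
                queue.append(neighbour)
    computed_sources.add(s)
    return distances[s]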
Thanks in advance!
I know this doesn't fully answer your question, but here is another approach you can try.
In networks you have a routing table for each node. You simply store, for every node in the network, which node you have to go to next. Example of the routing table of node D:
A -> B
B -> B
C -> E
D -> D
E -> E
You need to run BFS from each node to build all the routing tables, which takes O(|V|*(|V|+|E|)). The space complexity is quadratic, but you have to cover all possible paths anyway.
Once you have created all this information, you can simply start from a node, look up your destination node in the table, and find the next node to go to. This gives a much better query time (if you use the right data structure for the table).
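A rough sketch of building those routing tables with BFS, assuming an adjacency-list dict like the one above (the function and variable names are mine, not a library API):

from collections import deque

def build_routing_tables(graph):
    # graph: dict mapping node -> list of neighbouring nodes.
    # For every source, map each reachable node to the first hop to take.
    tables = {}
    for source in graph:
        next_hop = {source: source}
        queue = deque([source])
        while queue:
            v = queue.popleft()
            for neighbour in graph[v]:
                if neighbour not in next_hop:
                    # direct neighbours of the source are their own first hop;
                    # everything further away inherits the hop from its parent
                    next_hop[neighbour] = neighbour if v == source else next_hop[v]
                    queue.append(neighbour)
        tables[source] = next_hop
    return tables

tables[s][t] then gives the next node to visit on a shortest path from s to t, or raises a KeyError if t is unreachable from s.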

Random ultrametric trees

I've implemented a program in Python which generates random binary trees. Now I'd like to assign a distance to each internal node of the tree to make it ultrametric: the distance between the root and any leaf must be the same. If a node is a leaf then the distance is zero. Here is a node:
class Node():
    def __init__(self, G=None, D=None):
        self.id = ""
        self.distG = 0
        self.distD = 0
        self.G = G
        self.D = D
        self.parent = None
My idea is to set the distance h at the beginning and to decrease it as internal nodes are found, but it only works on the left side.
def lgBrancheRand(self, h):
    self.distD = h
    self.distG = h
    hrandomD = round(np.random.uniform(0, h), 3)
    hrandomG = round(np.random.uniform(0, h), 3)
    if self.D.D is not None:
        self.D.distD = hrandomD
        self.distD = round(h - hrandomD, 3)
        lgBrancheRand(self.D, hrandomD)
    if self.G.G is not None:
        self.G.distG = hrandomG
        self.distG = round(h - hrandomG, 3)
        lgBrancheRand(self.G, hrandomG)
In summary, you would create random matrices and apply UPGMA to each.
More complete answer below
Simply use the UPGMA algorithm. This is a clustering algorithm used to resolve a pairwise distance matrix.
You take the total genetic distance between two pairs of "taxa" (technically OTUs) and divide it by two. You assign the closest members of the pairwise matrix as the first 'node'. Reformat the matrix so these two pairs are combined into a single group ('removed') and find the next 'nearest neighbour', ad infinitum. I suspect R's 'ape' package will have an ultrametric algorithm which will save you from programming. I see that you are using Python, so BioPython MIGHT have this (big MIGHT); personally I would pipe this through a precompiled C program and collect the results, via PAUP or that sort of thing. I'm not going to write code, because I prefer Perl and would get flamed if any Perl code appeared in a Python question (the Empire has been established).
Anyway, you will find this algorithm produces a perfect ultrametric tree. Purists do not like ultrametric trees derived through this sort of algorithm; however, in your case it could be useful, because you could test which phylogeny from real data is most "clock-like" against the null distribution you are producing. In this context it would be cool.
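If you do stay in Python, a minimal sketch of the random-matrix + UPGMA idea could look like this; SciPy's average-linkage clustering (which is UPGMA) is my suggestion here, not something from the question:

import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, to_tree

# Random symmetric pairwise distance matrix for 5 "taxa"
n = 5
m = np.random.rand(n, n)
dm = (m + m.T) / 2.0
np.fill_diagonal(dm, 0.0)

# Average linkage on the condensed matrix is UPGMA; the resulting dendrogram
# is ultrametric by construction (every leaf sits at height 0).
Z = linkage(squareform(dm), method='average')
root = to_tree(Z)
print(root.dist)   # height of the root, i.e. the common root-to-leaf depth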
You might prefer to raise the question on bioinformatics stackexchange.

ECDSA implementation on Sage

First of all I must say my knowledge of Sage math is really very limited, but I really want to improve and be able to solve these problems. I have been asked to implement the following:
1 - Read the definition of ECDSA in FIPS 186-4 (http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-4.pdf) and implement it using Sage math with:
(a) prime elliptic curves (P-xxx)
(b) binary elliptic curves (B-xxx)
I tried solving (a) by looking around the internet a lot and ended up with the following code:
Exercise 1, a)
class ECDSA_a:
    def __init__(self):
        # Parameters for curve P-256 as stated in FIPS 186-4 D.1.2.3
        p256 = 115792089210356248762697446949407573530086143415290314195533631308867097853951
        a256 = p256 - 3
        b256 = ZZ("5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b", 16)
        # base point values
        gx = ZZ("6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296", 16)
        gy = ZZ("4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5", 16)
        self.F = GF(p256)
        self.C = EllipticCurve([self.F(a256), self.F(b256)])
        self.G = self.C(self.F(gx), self.F(gy))
        self.N = FiniteField(self.C.order())  # how many points are in our curve
        self.d = int(self.F.random_element())  # private key
        self.pd = self.G * self.d  # our pubkey
        self.e = int(self.N.random_element())  # our message

    # sign
    def sign(self):
        self.k = self.N.random_element()
        self.r = (int(self.k) * self.G).xy()[0]
        self.s = (1 / self.k) * (self.e + self.N(self.r) * self.d)

    # verify
    def verify(self):
        self.w = 1 / self.N(self.s)
        return self.r == (int(self.w * self.e) * self.G + int(self.N(self.r) * self.w) * self.pd).xy()[0]

    # mutate
    def mutate(self):
        s2 = self.N(self.s) * self.N(-1)
        if not (s2 != self.s):
            return False
        self.w = 1 / s2
        return self.r == (int(self.w * self.e) * self.G + int(self.N(self.r) * self.w) * self.pd).xy()[0]  # sign flip mutant
#TESTING
#Exercise 1 a)
print("Exercise 1 a)\n")
print("Elliptic Curve defined by y^2 = x^3 -3x +b256*(mod p256)\n")
E = ECDSA_a()
E.sign()
print("Verify signature = {}".format(E.verify()))
print("Mutating = {}".format(E.mutate()))
But now I wonder, is this code really what I have been asked for?
I mean, I got the values for p and all that from the link mentioned above.
But is this elliptic curve I made a prime one? (whatever that really means)
In other words, is this code I glued together the answer? And what is the mutate function actually doing? I understand the rest but don't see why it needs to be here...
Also, what could I do about question (b)? I have looked all around the internet but I can't find a single understandable mention of binary elliptic curves in Sage...
Could I just reuse the above code and simply change the curve creation to get the answer?
(a.) Is this code really what I have been asked for?
No.
The sign() method has the wrong signature: it does not accept an argument to sign.
It would be very helpful to write unit tests for your code based on published test vectors, perhaps these; cf. this Secp256k1 ECDSA test examples question.
You might also consider implementing the verification checks described in D.5 & D.6 (pp. 109 ff.).
(b.) binary elliptic curves
The FIPS publication you cited offers some advice on implementing such curves, and yes, you could leverage your current code. But there's probably less practical advantage to implementing them compared to the P-xxx curves, as the strength of B-xxx curves is on rockier footing. They have an advantage for hardware implementations such as FPGAs, but that's not relevant to your situation.
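As a rough illustration for (b), a binary curve can be set up in Sage like this sketch; note the toy field and random b below are placeholders for the real B-163/B-233/... constants, which you would copy from the binary-curve section of FIPS 186-4:

# B-xxx curves have the form y^2 + x*y = x^3 + x^2 + b over GF(2^m).
m = 13                                  # toy degree; B-163 would use m = 163
F = GF(2**m, 'z')                       # binary field GF(2^m)
b = F.random_element()                  # placeholder for the FIPS-specified b
while b == 0:
    b = F.random_element()
C = EllipticCurve(F, [1, 1, 0, 0, b])   # a-invariants [a1, a2, a3, a4, a6]
print(C.order())                        # FIPS additionally fixes the subgroup order n and a base point G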

Batch-constraining objects (feathers to a wing)

Really not long ago I had my first dumb question answered here, so... here I am again, with a hopefully less dumb and more interesting headscratcher. Keep in mind I am still making my baby steps in scripting!
Here it is: I need to rig a feathered wing, and I already have all the feathers in place. I thought of mimicking another rig I animated recently that had the feathers point-constrained to the arm and forearm, and orient-constrained to three other controllers on the arm: each and every feather was constrained to two of those controllers at a time, and the constraints' weights would shift as you went down the forearm towards the wrist, so that a feather exactly at mid-distance between the elbow and the wrist would be equally constrained by both controllers... you get the picture.
My reasoning was as follows: make a loop that iterates over every feather, gets its world position, finds its distance to each of the orient controllers (through Pythagoras), normalizes that, and feeds the values into the weight attributes of an orient constraint. I could even go the extra mile and pass the normalized distance through a sine function to get a nice easing into the feathers' silhouette.
My pseudo-code is ugly and broken, but it's a try. My issues are inlined.
Second try!
It works now, but only on the active object instead of the whole selection. What could be happening?
import maya.cmds as cmds

# find world space position of targets
base_pos = cmds.xform('base', q=1, ws=1, rp=1)
tip_pos = cmds.xform('tip', q=1, ws=1, rp=1)

def relative_dist_from_pos(pos, ref):
    # vector subtract to get relative pos
    pos_from_ref = [m - n for m, n in zip(pos, ref)]
    # Pythagoras to get distance from vector
    dist_from_ref = (pos_from_ref[0]**2 + pos_from_ref[1]**2 + pos_from_ref[2]**2)**.5
    return dist_from_ref

def weight_from_dist(dist_from_base, dist_to_tip):
    normalize_fac = (1 / (dist_from_base + dist_to_tip))
    dist_from_base *= normalize_fac
    dist_to_tip *= normalize_fac
    return dist_from_base, dist_to_tip

sel = cmds.ls(selection=True)

for obj in sel:
    # find world space pos of feather
    feather_pos = cmds.xform(obj, q=1, ws=1, rp=1)
    # call relative_dist_from_pos
    dist_from_base = relative_dist_from_pos(feather_pos, base_pos)
    dist_to_tip = relative_dist_from_pos(feather_pos, tip_pos)
    # normalize distances (capture the returned values; the function
    # does not modify its arguments in place)
    dist_from_base, dist_to_tip = weight_from_dist(dist_from_base, dist_to_tip)
    # constrain the feather - weights are inverted
    # because the smaller the distance, the stronger the constraint
    cmds.orientConstraint('base', obj, w=dist_to_tip)
    cmds.orientConstraint('tip', obj, w=dist_from_base)
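As an aside, a minimal sketch of the sine easing mentioned earlier (pure Python, nothing Maya-specific; the helper name is mine):

import math

def eased_weight(t):
    # Map a normalized distance t in [0, 1] to a smoothly eased weight in [0, 1].
    return math.sin(t * math.pi / 2.0) ** 2

print(eased_weight(0.5))    # 0.5 -- a feather halfway along keeps equal influence
print(eased_weight(0.25))   # ~0.146 -- eases in gently near one end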
There you are. Any pointers are appreciated.
Have a good night,
Hadriscus

Algorithm Is Node A Connected to Node B in Graph

I am looking for an algorithm to check for any valid connection (shortest or longest) between two arbitrary nodes on a graph.
My graph is fixed to a grid with logical (x, y) coordinates with north/south/east/west connections, but nodes can be removed randomly so you can't assume that taking the edge with coords closest to the target is always going to get you there.
The code is in python. The data structure is each node (object) has a list of connected nodes. The list elements are object refs, so we can then search that node's list of connected nodes recursively, like this:
for pnode in self.connected_nodes:
    for cnode in pnode.connected_nodes:
        ...etc
I've included a diagram showing how the nodes map to (x, y) coords and how they are connected north/east/south/west. Sometimes there are missing nodes (e.g. between J and K), and sometimes there are missing edges (e.g. between G and H). The presence of nodes and edges is in flux (although when we run the algorithm, it is taking a fixed snapshot in time), and can only be determined by checking each node for its list of connected nodes.
The algorithm needs to yield a simple true/false for whether there is a valid connection between two nodes. Recursing through every list of connected nodes explodes the number of operations required: if the node is n edges away, it requires at most 4^n operations. My understanding is that something like Dijkstra's algorithm works by finding the shortest path based on edge weights, but if there is no connection at all, would it still work?
For some background, I am using this to model 2D destructible objects. Each node represents a chunk of the material, and if one or more nodes do not have a connection to the rest of the material, they should separate off. In the diagram, D, H, and R should break off from the main body as they are not connected.
UPDATE:
Although many of the posted answers might well work, DFS is quick, easy and very appropriate. I'm not keen on the idea of sticking extra edges between nodes with high-value weights to use Dijkstra, because nodes themselves might disappear as well as edges. The SCC method seems more appropriate for distinguishing between strongly and weakly connected graph sections, which in my graph would work if there were a single edge between G and H.
Here is my experiment code for DFS search, which creates the same graph as shown in the diagram.
class node(object):
    def __init__(self, id):
        self.connected_nodes = []
        self.id = id

    def dfs_is_connected(self, node):
        # Initialise our stack and our discovered list
        stack = []
        discovered = []
        # Declare operations count to track how many iterations it took
        op_count = 0
        # Push this node to the stack, for our starting point
        stack.append(self)
        # Keep iterating while the stack isn't empty
        while stack:
            # Pop the top element off the stack
            current_node = stack.pop()
            # Is this the droid/node you are looking for?
            if current_node.id == node.id:
                # Stop!
                return True, op_count
            # Check if the current node has not been discovered
            if current_node not in discovered:
                # Increment op count
                op_count += 1
                # Is this the droid/node you are looking for?
                if current_node.id == node.id:
                    # Stop!
                    return True, op_count
                # Put this node in the discovered list
                discovered.append(current_node)
                # Iterate through all connected nodes of the current node
                for connected_node in current_node.connected_nodes:
                    # Push this connected node onto the stack
                    stack.append(connected_node)
        # Couldn't find the node, return false. Sorry bud
        return False, op_count
if __name__ == "__main__":
    # Initialise all nodes
    a = node('a')
    b = node('b')
    c = node('c')
    d = node('d')
    e = node('e')
    f = node('f')
    g = node('g')
    h = node('h')
    j = node('j')
    k = node('k')
    l = node('l')
    m = node('m')
    n = node('n')
    p = node('p')
    q = node('q')
    r = node('r')
    s = node('s')
    # Connect up nodes
    a.connected_nodes.extend([b, e])
    b.connected_nodes.extend([a, f, c])
    c.connected_nodes.extend([b, g])
    d.connected_nodes.extend([r])
    e.connected_nodes.extend([a, f, j])
    f.connected_nodes.extend([e, b, g])
    g.connected_nodes.extend([c, f, k])
    h.connected_nodes.extend([r])
    j.connected_nodes.extend([e, l])
    k.connected_nodes.extend([g, n])
    l.connected_nodes.extend([j, m, s])
    m.connected_nodes.extend([l, p, n])
    n.connected_nodes.extend([k, m, q])
    p.connected_nodes.extend([s, m, q])
    q.connected_nodes.extend([p, n])
    r.connected_nodes.extend([h, d])
    s.connected_nodes.extend([l, p])
    # Check if a is connected to q
    print(a.dfs_is_connected(q))
    print(a.dfs_is_connected(d))
    print(p.dfs_is_connected(h))
To find this out, you just need to run a simple DFS or BFS from one of the nodes: it will find all nodes reachable within that connected component of the graph, so you just note whether you encounter the other node during the run of the algorithm.
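For completeness, a minimal BFS version of the same reachability check, as a sketch built on the node class from the question:

from collections import deque

def bfs_is_connected(start, target):
    # Return True if `target` is reachable from `start` by walking connected_nodes.
    seen = {start}
    queue = deque([start])
    while queue:
        current = queue.popleft()
        if current is target:
            return True
        for neighbour in current.connected_nodes:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return False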
There is a way to use Dijkstra to find the path. If there is an edge between two nodes, put 1 for its weight; if there is no edge, put a weight of sys.maxint. Then when the minimum path is calculated, if it is larger than the number of nodes, there is no path between them.
Another approach is to first find the strongly connected components of the graph. If the nodes are in the same strongly connected component then use Dijkstra to find the path; otherwise there is no path that connects them.
You could take a look at the A* path finding algorithm, which uses heuristics to make it more efficient than Dijkstra's; so if there isn't anything you can exploit in your problem, you might be better off using Dijkstra's algorithm. You would need positive weights, though; if that is not something you have in your graph, you could simply give each edge a weight of 1.
Looking at the pseudo code on Wikipedia, A* moves from one node to another by getting the neighbours of the current node. Dijkstra's Algorithm keeps an adjacency list so that it knows which nodes are connected to each other.
Thus, if you were to start from node H, you could only go to R and D. Since these nodes are not connected to the others, the algorithm will not go through the other nodes.
You can find the strongly connected components (SCC) of your graph and then check whether the nodes of interest are in the same component or not. In your example, H-R-D will be the first component and the rest the second, so for H and R the result will be true, but for H and A it will be false.
See SCC algorithm here: https://class.coursera.org/algo-004/lecture/53.
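Since the graph here is undirected, plain connected components are enough; a minimal sketch of labelling components with DFS and checking membership (the helper name is mine, built on the same node objects as above):

def same_component(nodes, a, b):
    # Label each connected component, then compare the labels of a and b.
    # `nodes` is the full list of node objects.
    label = {}
    current = 0
    for start in nodes:
        if start in label:
            continue
        current += 1
        stack = [start]
        label[start] = current
        while stack:
            v = stack.pop()
            for neighbour in v.connected_nodes:
                if neighbour not in label:
                    label[neighbour] = current
                    stack.append(neighbour)
    return label[a] == label[b]

Called on the nodes from the example, this reports h and r in the same component, but h and a in different ones.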
