Random ultrametric trees - Python

I've implemented a program in Python which generates random binary trees. Now I'd like to assign a distance to each internal node of the tree to make it ultrametric, meaning the distance between the root and any leaf must be the same. If a node is a leaf then the distance is zero. Here is the node class:
class Node:
    def __init__(self, G=None, D=None):
        self.id = ""
        self.distG = 0
        self.distD = 0
        self.G = G
        self.D = D
        self.parent = None
My idea is to set the distance h at the beginning and to decrease it each time an internal node is found, but it only works on the left side.
def lgBrancheRand(self, h):
    self.distD = h
    self.distG = h
    hrandomD = round(np.random.uniform(0, h), 3)
    hrandomG = round(np.random.uniform(0, h), 3)
    if self.D.D is not None:
        self.D.distD = hrandomD
        self.distD = round(h - hrandomD, 3)
        lgBrancheRand(self.D, hrandomD)
    if self.G.G is not None:
        self.G.distG = hrandomG
        self.distG = round(h - hrandomG, 3)
        lgBrancheRand(self.G, hrandomG)

In summary, you would create random distance matrices and apply UPGMA to each.
A more complete answer is below.
Simply use the UPGMA algorithm. This is a clustering algorithm used to resolve a pairwise distance matrix.
You take the total genetic distance between two pairs of "taxa" (technically OTUs) and divide it by two. You assign the closest members of the pairwise matrix as the first 'node'. Reformat the matrix so these two taxa are combined into a single group ('removed'), then find the next 'nearest neighbour', ad infinitum. I suspect R's 'ape' package has an ultrametric algorithm, which would save you from programming it yourself. I see that you are using Python, so Biopython MIGHT have this (big MIGHT); personally I would pipe this through a precompiled C program and collect the results via PAUP or that sort of thing. I'm not going to write code, because I prefer Perl and would get flamed if any Perl code appeared in a Python question (so the Empire has established).
Anyway, you will find this algorithm produces a perfect ultrametric tree. Purists do not like ultrametric trees derived through this sort of algorithm. However, for your purposes it could be useful, because you could test which phylogeny from real data is most "clock-like" against the null distribution you are producing. In that context it would be cool.
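For what it's worth, here is a minimal Python sketch of that summary, assuming SciPy is acceptable: average linkage (method="average" in scipy.cluster.hierarchy.linkage) is UPGMA, and the cophenetic distances of the resulting dendrogram are ultrametric. The matrix size and values are illustrative only.
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, cophenet

n = 6                                         # number of leaves (illustrative)
d = np.random.uniform(1, 10, size=(n, n))     # random pairwise distances
d = (d + d.T) / 2                             # symmetrize
np.fill_diagonal(d, 0)                        # distances to self are zero
Z = linkage(squareform(d), method="average")  # average linkage = UPGMA
ultra = squareform(cophenet(Z))               # ultrametric distance matrix
The third column of Z holds the heights at which clusters merge, which you could map back onto your own Node objects.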
You might prefer to raise the question on bioinformatics stackexchange.

Related

CadQuery: Selecting an edge by index (Filleting specific edges)

I come from the engineering CAD world and I'm creating some designs in CadQuery. What I want to do is this (pseudocode):
edges = part.edges()
edges[n].fillet(r)
Or ideally have the ability to do something like this (though I can't find any methods for edge properties). Pseudocode:
edges = part.edges()
for edge in edges:
    if edge.length() > x:
        edge.fillet(a)
    else:
        edge.fillet(b)
This would be very useful when a design contains non-orthogonal faces. I understand that I can select edges with selectors, but I find them unnecessarily complicated, and they work best with orthogonal faces. FreeCAD lets you treat edges as a list.
I believe there might be a method to select the closest edge to a point, but I can't seem to track it down.
If someone can provide guidance that would be great -- thank you!
Bonus question: Is there a way to return coordinates of geometry as a list or vector? e.g.:
origin = cq.workplane.center().val
>> [x,y,z]
(or something like the above)
Take a look at this code; I hope it will be helpful.
import cadquery as cq
plane1 = cq.Workplane()
block = plane1.rect(10,12).extrude(10)
edges = block.edges("|Z")
filleted_block = edges.all()[0].fillet(0.5)
show(filleted_block)
For posterity: to select multiple edges, e.g. for chamfering, you can use newObject() on the Workplane. The argument is a list of edges (they have to be cq.occ_impl.shapes.Edge instances, NOT cq.Workplane instances).
import cadquery as cq
model = cq.Workplane().box(10, 10, 5)
edges = model.edges()
# edges.all() returns Workplanes; we have to get the underlying geometry
selected = list(map(lambda x: x.objects[0], edges.all()))
model_with_chamfer = model.newObject(selected).chamfer(1)
To get edge length you can do something like this:
edge = model.edges().all()[0]  # This selects one 'random' edge
length = edge.objects[0].Length()
edge.Length() doesn't work, since edge is a Workplane instance, not a geometry instance.
To get edges of a certain length, you can pair each edge's geometry with its length and filter the pairs using Python's built-in filter(). Here is a snippet of my implementation for chamfering the short edges on the topmost face:
from statistics import mean  # needed for the average length below

top_edges = model.edges(">Z and #Z")

def get_length(edge):
    try:
        return edge.vals()[0].Length()
    except Exception:
        return 0.0

# Inside edges are shorter - filter only those
edge_len_list = list(map(
    lambda x: (x.objects[0], get_length(x)),
    top_edges.all()))
avg = mean([a for _, a in edge_len_list])
selected = filter(lambda x: x[1] < avg, edge_len_list)
selected = [e for e, _ in selected]
vertical_edges = model.edges("|Z").all()
selected.extend(vertical_edges)
model = model.newObject(selected)
model = model.chamfer(chamfer_size)

Improving BFS performance with some kind of memoization

I'm trying to build an algorithm which will find distances from one vertex to the others in a graph.
Let's say with the really simple example that my network looks like this:
network = [[0,1,2],[2,3,4],[4,5,6],[6,7]]
I created BFS code which is supposed to find the lengths of paths from the specified source to the other vertices of the graph:
from itertools import chain
import numpy as np

n = 8
graph = {}
for i in range(0, n):
    graph[i] = []

for communes in network:
    for vertice in communes:
        work = communes.copy()
        work.remove(vertice)
        graph[vertice].append(work)

for k, v in graph.items():
    graph[k] = list(chain(*v))
def bsf3(graph, s):
    matrix = np.zeros([n, n])
    dist = {}
    visited = []
    queue = [s]
    dist[s] = 0
    visited.append(s)
    matrix[s][s] = 0
    while queue:
        v = queue.pop(0)
        for neighbour in graph[v]:
            if neighbour in visited:
                pass
            else:
                matrix[s][neighbour] = matrix[s][v] + 1
                queue.append(neighbour)
                visited.append(neighbour)
    return matrix
bsf3(graph,2)
First I create the graph (a dictionary) and then use the function to find distances.
What I'm concerned about is that this approach doesn't scale to larger networks (say with 1000 people in there). What I'm thinking about is using some kind of memoization (that's actually why I made a matrix instead of a list). The idea is that when the algorithm calculates the path from, let's say, 0 to 3 (which it already does), it should also keep track of the other routes, so that, for example, matrix[1][3] = 1, etc.
So when I then call the function as bsf3(graph, 1), it would not calculate everything from scratch but could reuse some values already stored in the matrix.
Thanks in advance!
I know this doesn't fully answer your question, but here is another approach you can try.
In networks you have a routing table for each node. You simply save, for every node in the network, which node you have to go to next. Example of the routing table of node D:
A -> B
B -> B
C -> E
D -> D
E -> E
You need to run BFS from each node to build all the routing tables, which takes O(|V| * (|V| + |E|)). The space complexity is quadratic, but you have to check all possible paths.
Once you have built all this information, you can simply start from a node, look up the destination node in its table, and find the next node to go to. This gives a much better query time (if you use the right data structure for the table).
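As a rough sketch of that idea (reusing the adjacency dict graph built in the question; build_routing_tables and next_hop are illustrative names, not from the post), running BFS from every node and remembering the first hop taken gives exactly such a routing table:
from collections import deque

def build_routing_tables(graph):
    tables = {}
    for source in graph:
        # BFS from `source`, remembering the first hop used to reach each node
        next_hop = {source: source}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v not in next_hop:
                    # neighbours of the source are their own first hop,
                    # everything else inherits the first hop of its parent
                    next_hop[v] = v if u == source else next_hop[u]
                    queue.append(v)
        tables[source] = next_hop
    return tables

# tables = build_routing_tables(graph)
# tables[d][a] then tells node d which neighbour to go to next in order to reach a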

Compute reachability of elements in a list of tuples

I have a list of tuples like this.
a = [(1,2),(1,3),(1,4),(2,5),(6,5),(7,8)]
In this list, 1 relates to 2, 2 relates to 5, and 5 relates to 6, therefore 1 relates to 6. Similarly I need to find the relations between the other elements in the tuples. I need a function that takes the input values and outputs as follows:
input = (1,6) #output = True
input = (5,3) #output = True
input = (2,8) #output = False
I do not have knowledge of itertools or map functions. Can they be used to solve these types of problems?
And out of curiosity, where can I find these types of problems to practice on, and where are they encountered in real-life situations?
This can easily be done by considering the tuples as edges in a graph. The question is then reduced to checking whether there is a path between the two nodes.
There exist lots of nice libraries for this; see e.g. networkx:
import networkx as nx
a = [(1,2),(1,3),(1,4),(2,5),(6,5),(7,8)]
G = nx.Graph(a)
nx.has_path(G, 1, 6) # True
nx.has_path(G, 5, 3) # True
nx.has_path(G, 2, 8) # False
The answer above nicely states your problem as a graph problem, where every time you need to run your algorithm you need to check for the existence of a path between your input vertices. The time complexity of every query then depends on the size, order, diameter and degree of the underlying graph.
However, if you intend to run this algorithm many times with the same array a, it may be worth doing some preprocessing on the input graph to find its connected components (Wikipedia: connected components) first. In that case you can get constant time for every query. Here is the code I suggest:
# NOTE : tested using python 3.6.1
# WARNING : no input sanitization
a = [(1,2),(1,3),(1,4),(2,5),(6,5),(7,8)]
n = 8 # order of the underlying graph

# prepare graph as lists of neighbors for every vertex, i.e. adjacency lists
# (extra unused vertex '0', just to match the value range of the problem)
graph = [[] for i in range(n+1)]
for edge in a:
    graph[edge[0]].append(edge[1])
    graph[edge[1]].append(edge[0])
print( "graph : " + str(graph) )

# list of unprocessed vertices : contains all of them at the beginning
unprocessed_vertices = {i for i in range(1,n+1)}

# subroutine to discover the connected component of a vertex
def build_component():
    component = [] # current connected component
    curr_vertices = {unprocessed_vertices.pop()} # locally unprocessed vertices, initialize with one of the globally unprocessed vertices
    while len(curr_vertices) > 0:
        curr_vertex = curr_vertices.pop() # vertex to be processed
        # add unprocessed neighbours of current vertex to the set of vertices to process
        for neighbour in graph[curr_vertex]:
            if neighbour in unprocessed_vertices:
                curr_vertices.add(neighbour)
                unprocessed_vertices.remove(neighbour)
        component.append(curr_vertex)
    return component

# main algorithm : graph traversal on multiple connected components
components = []
while len(unprocessed_vertices) > 0:
    components.append( build_component() )
print( "components : " + str(components) )

# assign a number to each component
component_numbers = [None] * (n+1)
curr_number = 1
for comp in components:
    for vertex in comp:
        component_numbers[vertex] = curr_number
    curr_number += 1
print( "component_numbers : " + str(component_numbers) )

# main functionality
def is_connected( pair ):
    return component_numbers[pair[0]] == component_numbers[pair[1]]

# run main functionality on inputs : every call is executed in constant time now, regardless of the size of the graph
print( is_connected( (1,6) ) )
print( is_connected( (5,3) ) )
print( is_connected( (2,8) ) )
I don't really know about the most likely situations where this problem could be encountered, but I suppose it can have applications in some clustering tasks, or when you want to know whether it is possible to go from one place to another. If the edges of the graph represent dependencies between modules, this problem would tell you whether two parts depend on each other, so there may be potential applications in compiling or the management of large projects. The underlying problem is a "connected component" problem, which is among the problems we know polynomial algorithms for.
It is generally very useful to model these kinds of problems with graphs, as these objects have a very simple structure, and most of the time we can reduce the original problem to a well-known problem on graphs.

Can't implement GJK distance algorithm

I'm trying to design my own physics engine from scratch, as well as the vector/matrix libraries.
Everything worked beautifully so far, until I tried to implement collision detection in my library. First with SAT, which worked great for detection, but I wanted to find the distance between the objects as well. Then I tried to implement the GJK distance algorithm, just to see if I could find the distance between the origin and a polygon. But it just doesn't work: the smallest distance found by my implementation is always one of the vertices of the polygon.
I know I made the other libraries from scratch, but I'm positive that they are working. Anyway, here's the code where I've implemented GJK:
# objectL[0] is a hexagon
v = objectL[0].nodes[0]
W = []
u = 0
close_enough = False
while not close_enough and v != Vector(0, 0):
    w = objectL[0].support(-v)
    d = v*w/abs(v)  # *: dot product, abs: magnitude
    u = max(u, d)
    close_enough = abs(v) - u <= 0.0001
    if not close_enough:
        W.append(w)
        while len(W) > 2:
            del W[0]
        # distance from the origin to the simplex formed by W
        v = Vector(0, 0).vectorToLine(*W)
And now the support method:
def support(self, axis):
    maxP = self.nodes[0]*axis  # dot product of first vertex with the axis
    n = self.nodes[0]
    for node in self.nodes[1:]:
        p = node*axis
        if p > maxP:
            maxP = p
            n = node
    return node
Those are the code snippets where I think the error is, but I can't find it. The GJK algorithm I've copied from here. Thanks!
Edit:
Here is my project (implemented in Pygame).
OK, found the error. It wasn't in the implementation, but rather in the functions I had previously written: the support method, which returned node instead of n, and the vectorToLine function, which returned an incorrect vector (negative value).
Also, for those reading this post years from now and trying to implement this algorithm, note that I changed the while len(W) > 2 part to:
while len(W) > 2:
    # find and drop the simplex point farthest from the origin
    maxD = 0
    maxW = None
    for w in W:
        if abs(w) > maxD:
            maxD = abs(w)
            maxW = w
    W.remove(maxW)
This removes the farthest point of the simplex/triangle, so the two points closest to the origin are kept to continue the algorithm.
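For completeness, here is the support method with that fix applied (the only change from the question's version is returning n, the vertex that actually achieved the maximum dot product, instead of the loop variable node):
def support(self, axis):
    maxP = self.nodes[0]*axis  # dot product of first vertex with the axis
    n = self.nodes[0]
    for node in self.nodes[1:]:
        p = node*axis
        if p > maxP:
            maxP = p
            n = node
    return n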

Algorithm Is Node A Connected to Node B in Graph

I am looking for an algorithm to check for any valid connection (shortest or longest) between two arbitrary nodes on a graph.
My graph is fixed to a grid with logical (x, y) coordinates with north/south/east/west connections, but nodes can be removed randomly so you can't assume that taking the edge with coords closest to the target is always going to get you there.
The code is in Python. The data structure is that each node (object) has a list of connected nodes. The list elements are object refs, so we can search each node's list of connected nodes recursively, like this:
for pnode in self.connected_nodes:
    for cnode in pnode.connected_nodes:
        ...etc
I've included a diagram showing how the nodes map to (x, y) coords and how they are connected north/east/south/west. Sometimes there are missing nodes (e.g. between J and K), and sometimes there are missing edges (e.g. between G and H). The presence of nodes and edges is in flux (although when we run the algorithm, it is taking a fixed snapshot in time), and can only be determined by checking each node for its list of connected nodes.
The algorithm needs to yield a simple true/false as to whether there is a valid connection between two nodes. Recursing through every list of connected nodes explodes the number of operations required - if the node is n edges away, it requires at most 4^n operations. My understanding is that something like Dijkstra's algorithm works by finding the shortest path based on edge weights, but if there is no connection at all, would it still work?
For some background, I am using this to model 2D destructible objects. Each node represents a chunk of the material, and if one or more nodes do not have a connection to the rest of the material then they should separate off. In the diagram, D, H and R should break off from the main body as they are not connected.
UPDATE:
Although many of the posted answers might well work, DFS is quick, easy and very appropriate. I'm not keen on the idea of sticking extra edges between nodes with high-value weights to use Dijkstra, because nodes themselves might disappear as well as edges. The SCC method seems more appropriate for distinguishing between strongly and weakly connected graph sections, which in my graph would work if there were a single edge between G and H.
Here is my experiment code for DFS search, which creates the same graph as shown in the diagram.
class node(object):
    def __init__(self, id):
        self.connected_nodes = []
        self.id = id

    def dfs_is_connected(self, node):
        # Initialise our stack and our discovered list
        stack = []
        discovered = []
        # Declare operations count to track how many iterations it took
        op_count = 0
        # Push this node to the stack, for our starting point
        stack.append(self)
        # Keep iterating while the stack isn't empty
        while stack:
            # Pop top element off the stack
            current_node = stack.pop()
            # Is this the droid/node you are looking for?
            if current_node.id == node.id:
                # Stop!
                return True, op_count
            # Check if current node has not been discovered
            if current_node not in discovered:
                # Increment op count
                op_count += 1
                # Is this the droid/node you are looking for?
                if current_node.id == node.id:
                    # Stop!
                    return True, op_count
                # Put this node in the discovered list
                discovered.append(current_node)
                # Iterate through all connected nodes of the current node
                for connected_node in current_node.connected_nodes:
                    # Push this connected node into the stack
                    stack.append(connected_node)
        # Couldn't find the node, return false. Sorry bud
        return False, op_count
if __name__ == "__main__":
    # Initialise all nodes
    a = node('a')
    b = node('b')
    c = node('c')
    d = node('d')
    e = node('e')
    f = node('f')
    g = node('g')
    h = node('h')
    j = node('j')
    k = node('k')
    l = node('l')
    m = node('m')
    n = node('n')
    p = node('p')
    q = node('q')
    r = node('r')
    s = node('s')
    # Connect up nodes
    a.connected_nodes.extend([b, e])
    b.connected_nodes.extend([a, f, c])
    c.connected_nodes.extend([b, g])
    d.connected_nodes.extend([r])
    e.connected_nodes.extend([a, f, j])
    f.connected_nodes.extend([e, b, g])
    g.connected_nodes.extend([c, f, k])
    h.connected_nodes.extend([r])
    j.connected_nodes.extend([e, l])
    k.connected_nodes.extend([g, n])
    l.connected_nodes.extend([j, m, s])
    m.connected_nodes.extend([l, p, n])
    n.connected_nodes.extend([k, m, q])
    p.connected_nodes.extend([s, m, q])
    q.connected_nodes.extend([p, n])
    r.connected_nodes.extend([h, d])
    s.connected_nodes.extend([l, p])
    # Check if a is connected to q
    print(a.dfs_is_connected(q))
    print(a.dfs_is_connected(d))
    print(p.dfs_is_connected(h))
To find this out, you just need to run a simple DFS or BFS from one of the nodes; it will find all reachable nodes within a connected component of the graph, so you just note whether you find the other node during the run of the algorithm.
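For example, a BFS variant of the same reachability check could look like this sketch (it assumes the node class from the question, with its connected_nodes lists; bfs_is_connected is an illustrative name):
from collections import deque

def bfs_is_connected(start, target):
    discovered = {start}
    queue = deque([start])
    while queue:
        current = queue.popleft()
        if current is target:
            return True
        for neighbour in current.connected_nodes:
            if neighbour not in discovered:
                discovered.add(neighbour)
                queue.append(neighbour)
    return False

# e.g. bfs_is_connected(a, q) or bfs_is_connected(a, d) on the nodes built in the question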
There is a way to use Dijkstra to find the path. If there is an edge between two nodes, give it a weight of 1; if there is no edge, give it a weight of sys.maxint. Then, once the minimum path is calculated, if it is larger than the number of nodes, there is no path between them.
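A minimal sketch of that idea (connected_via_dijkstra and the weights layout are illustrative, not from the question; sys.maxsize plays the role of sys.maxint on Python 3):
import sys
import heapq

def connected_via_dijkstra(weights, source, target):
    # weights[u][v] is 1 for a real edge and sys.maxsize for a missing one
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, sys.maxsize):
            continue  # stale heap entry
        for v, w in weights[u].items():
            if d + w < dist.get(v, sys.maxsize):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    # a shortest path longer than the node count must have used a "missing" edge
    return dist.get(target, sys.maxsize) < len(weights)

# illustrative usage: only the 0-1 edge is real
INF = sys.maxsize
weights = {0: {1: 1, 2: INF}, 1: {0: 1, 2: INF}, 2: {0: INF, 1: INF}}
print(connected_via_dijkstra(weights, 0, 1))  # True
print(connected_via_dijkstra(weights, 0, 2))  # False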
Another approach is to first find the strongly connected components of the graph. If the nodes are in the same strongly connected component, then use Dijkstra to find the path; otherwise there is no path that connects them.
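A rough sketch of this components-first approach using networkx (the grid and the removed node are illustrative stand-ins for the chunk graph):
import networkx as nx

G = nx.grid_2d_graph(4, 5)      # grid of chunks with N/S/E/W connections
G.remove_node((1, 2))           # a destroyed chunk
comp = {v: i for i, c in enumerate(nx.connected_components(G)) for v in c}
a, b = (0, 0), (3, 4)
if comp[a] == comp[b]:          # same component, so a path is guaranteed
    path = nx.dijkstra_path(G, a, b)   # edge weights default to 1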
You could take a look at the A* pathfinding algorithm, which uses heuristics to make it more efficient than Dijkstra's; so if there isn't anything you can exploit in your problem, you might be better off using Dijkstra's algorithm. You would need positive weights, though. If that is not something you have in your graph, you could simply give each edge a weight of 1.
Looking at the pseudocode on Wikipedia, A* moves from one node to another by getting the neighbours of the current node. Dijkstra's algorithm keeps an adjacency list so that it knows which nodes are connected to each other.
Thus, if you were to start from node H, you could only go to R and D. Since these nodes are not connected to the others, the algorithm will not go through the other nodes.
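For reference, here is a minimal A* reachability sketch with unit edge weights and a Manhattan heuristic (astar_connected and the grid_neighbours callback are illustrative, not part of the question's code):
import heapq

def astar_connected(start, goal, neighbours):
    def h(p):
        # Manhattan distance heuristic on (x, y) coordinates
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start)]
    best_g = {start: 0}
    while open_heap:
        f, g, current = heapq.heappop(open_heap)
        if current == goal:
            return True
        if g > best_g.get(current, float("inf")):
            continue  # stale heap entry
        for nxt in neighbours(current):
            ng = g + 1
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return False

# illustrative usage: a 3x3 grid with one chunk missing
missing = {(1, 1)}
def grid_neighbours(p):
    x, y = p
    cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [c for c in cand if 0 <= c[0] < 3 and 0 <= c[1] < 3 and c not in missing]

print(astar_connected((0, 0), (2, 2), grid_neighbours))  # True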
You can find the strongly connected components (SCC) of your graph and then check whether the nodes of interest are in the same component or not. In your example, H-R-D will be the first component and the rest the second, so for H and R the result will be true, but for H and A it will be false.
See SCC algorithm here: https://class.coursera.org/algo-004/lecture/53.
