I want to implement a binary tree in Python. I submit my code below. What I would like to do is set the height of the binary tree with the variable L. But when I run the code, it seems to have created a binary tree that is larger than I expected.
I arrived at this conclusion because when I set the height to 1 and do print(node.right.right.right), I still get 1.
class Tree:
    def __init__(self, x, left=None, right=None):
        self.x = x
        self.left = left
        self.right = right

    def one_tree(self, node):
        node = Tree(1)
        node.right = Tree(1)
        node.left = Tree(1)
        return node

node = Tree(1)
node = node.one_tree(node)
L = 1
while L > 0:
    node = node.one_tree(node)
    node.left = node
    node.right = node
    L = L - 1
print(node.right.right.right.right)
I found a problem with your code: the one_tree method overwrites its node argument.
class Tree:
    def __init__(self, x, left=None, right=None):
        self.x = x
        self.left = left
        self.right = right

    def one_tree(self, node):
        node = Tree(1)  # This assignment overwrites the argument 'node'.
        node.right = Tree(1)
        node.left = Tree(1)
        return node
The one_tree method takes a node argument, but the first line of the method overwrites it with node = Tree(1). Whatever you pass in, the method always works on a brand-new Tree instance.
Several issues:
one_tree doesn't use the node that you pass as an argument, so whatever you pass to it, the returned tree will always have 3 nodes (a root with 2 children).
one_tree doesn't use self either, so it really should be a class method (or a plain function), not an instance method.
If the intended algorithm was to add a level above the already-created tree, and have the new root's children reference the existing tree, then you would need to create only one node, not three, and let the two children be the given node.
Not a problem as such, but your loop is not really the "Python way" to loop L times. Use range instead.
This means your code could be this:
class Tree:
    def __init__(self, x, left=None, right=None):
        self.x = x
        self.left = left
        self.right = right

node = Tree(1)
L = 1
for _ in range(L):
    node = Tree(1, node, node)
Now you should still be careful with this tree, as it only has L+1 unique node instances, where all "nodes" on the same level are actually the same node instance. All the rest are references that give the "impression" of having a tree with many more nodes. Whenever you start mutating nodes in that tree, you'll see undesired effects.
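To make that concrete, here is a small demo (the values are made up) showing that after this loop node.left and node.right are literally the same object, so one mutation is visible through every path:

```python
class Tree:
    def __init__(self, x, left=None, right=None):
        self.x = x
        self.left = left
        self.right = right

node = Tree(1)
for _ in range(2):
    node = Tree(1, node, node)

# Both children are the SAME instance, not two equal-looking nodes:
print(node.left is node.right)   # True

# Mutating "one" node changes what you see through every alias:
node.left.x = 99
print(node.right.x)              # 99
```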
If you really need separate node instances for all nodes, then the algorithm will need to be adapted. You could then use recursion:
def create(height):
    if height == 0:  # base case: a single leaf node
        return Tree(1)
    return Tree(1, create(height - 1), create(height - 1))
L = 1
node = create(L)
How to initialize head in this implementation of LinkedList in python?
I know a Node class can be defined that will make things easier, but I am trying to do it this way (like in C++).
class LinkedList:
    def __init__(self, val=None, next=None):
        self.val = val
        self.next = next
If you just have this one class, then it actually serves as a Node class, and you lack the container class that would hold the head member.
As a consequence, your program has to manage the head itself. The variable used to reference a linked list can serve as that head variable, but that also means an empty list is represented by None.
For instance:
head = None                 # Empty list
head = LinkedList(1, head)  # Add the first node to it
head = LinkedList(2, head)  # Prepend the second node to it
head = LinkedList(3, head)  # Prepend the third node to it

# Print the values in the list:
node = head
while node:
    print(node.val, end=" ")
    node = node.next
print()
Output will be:
3 2 1
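For comparison, the container class mentioned above could look like this (a sketch; the method names prepend and values are my own invention):

```python
class Node:
    def __init__(self, val=None, next=None):
        self.val = val
        self.next = next

class LinkedList:
    """Container holding the head; an empty list is a real object, not None."""
    def __init__(self):
        self.head = None

    def prepend(self, val):
        # New node points at the old head, then becomes the head.
        self.head = Node(val, self.head)

    def values(self):
        # Walk the chain and collect the values in order.
        node, out = self.head, []
        while node:
            out.append(node.val)
            node = node.next
        return out

ll = LinkedList()
for v in (1, 2, 3):
    ll.prepend(v)
print(ll.values())   # [3, 2, 1]
```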
Is it possible to write this singly linked list program using only the Node class, moving all the methods of class LinkedList into it and eliminating the LinkedList class entirely? If it is possible, then why do we prefer writing it this way rather than with a single class?
Some say we use it to keep track of the head node, but I don't get it. We could simply use a variable named head to store the head node and then use it in other operations. So why a separate class for the head node?
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None

    def push(self, new_data):
        new_node = Node(new_data)
        new_node.next = self.head
        self.head = new_node

    def deleteNode(self, key):
        temp = self.head
        if temp is not None:
            if temp.data == key:
                self.head = temp.next
                temp = None
                return
        while temp is not None:
            if temp.data == key:
                break
            prev = temp
            temp = temp.next
        if temp is None:
            return
        prev.next = temp.next
        temp = None

    def printList(self):
        temp = self.head
        while temp:
            print(" %d" % temp.data, end="")
            temp = temp.next
There are lots and lots of ways to implement the same interface as this LinkedList class.
One of those ways would be to give the LinkedList class data and next fields, where next points to a linked list... but why do you think that is better? The Node class in your example is only used inside a LinkedList. It has no external purpose at all. The author of this LinkedList class made two classes so he could separate the operations applied to nodes from the operations applied to the list as a whole, because that's how he liked to think about it. Whether you think your way is better or worse is a matter of choice...
But here's the real reason why it's better to keep LinkedList separate from nodes:
In real life programming, you will encounter many singly-linked lists. Probably none of these will be instances of any kind of List class. You will just have a bunch of objects that are linked together by some kind of next pointer. You will essentially have only nodes. You will want to perform list operations on those linked nodes without adding list methods to them, because those node objects will have their own distinct purposes -- they are not meant to be lists, and you will not want to make them lists.
So this LinkedList class you have above is teaching you how to perform those list operations on linked objects. You will never write this class in real life, but you will use these techniques many many times.
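To sketch what that looks like in practice (the Task class and its fields are invented for illustration), here is a list operation written as a plain function over linked objects, with no list class at all:

```python
class Task:
    """A domain object that happens to be linked to another task."""
    def __init__(self, name, next=None):
        self.name = name
        self.next = next

def count_nodes(head):
    """A list operation applied to linked objects; no LinkedList class needed."""
    n = 0
    while head is not None:
        n += 1
        head = head.next
    return n

def find(head, name):
    """Linear search over the chain of next pointers."""
    while head is not None:
        if head.name == name:
            return head
        head = head.next
    return None

chain = Task("a", Task("b", Task("c")))
print(count_nodes(chain))     # 3
print(find(chain, "b").name)  # b
```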
I want to print my binary tree vertically, and the nodes should be random numbers, but the output I get is not what I want.
For example, I receive something like this:
497985
406204
477464
But program should print this:
.....497985
406204
.....477464
I don't know where the problem is. I'm a beginner at programming, so if anyone can help me, that would be great.
from numpy.random import randint
import random
from timeit import default_timer as timer

class binarytree:
    def __init__(self):
        self.elem = 0
        self.left = None
        self.right = None

def printtree(tree, h):
    if tree is not None:
        printtree(tree.right, h + 1)
        for i in range(1, h):
            print(end=".....")
        print(tree.elem)
        printtree(tree.left, h + 1)

def generate(tree, N, h):
    if N == 0:
        tree = None
    else:
        tree = binarytree()
        x = randint(0, 1000000)
        tree.elem = int(x)
        generate(tree.left, N // 2, h)
        generate(tree.right, N - N // 2 - 1, h)
        printtree(tree, h)

tree = binarytree()
generate(tree, 3, 0)
I've re-written your code to be a little more Pythonic, and fixed what appeared
to be some bugs:
from random import randrange

class BinaryTree:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

    @classmethod
    def generate_random_tree(cls, depth):
        if depth > 0:
            return cls(randrange(1000000),
                       cls.generate_random_tree(depth - 1),
                       cls.generate_random_tree(depth - 1))
        return None

    def __str__(self):
        """
        Overloaded method to format tree as string
        """
        return "\n".join(self._format())

    def _format(self, cur_depth=0):
        """
        Format tree as string given current depth. This is a generator of
        strings which represent lines of the representation.
        """
        if self.right is not None:
            yield from self.right._format(cur_depth + 1)
        yield "{}{}".format("." * cur_depth, self.val)
        if self.left is not None:
            yield from self.left._format(cur_depth + 1)

print(BinaryTree.generate_random_tree(4))
print()
print(BinaryTree(1,
                 BinaryTree(2),
                 BinaryTree(3,
                            BinaryTree(4),
                            BinaryTree(5))))
This outputs a tree, which grows from the root at the middle-left to leaves at
the right:
...829201
..620327
...479879
.746527
...226199
..463199
...498695
987559
...280755
..168727
...603817
.233132
...294525
..927571
...263402
..5
.3
..4
1
.2
Here it's printed one random tree, and one tree that I manually constructed.
I've written it so that the "right" subtree gets printed on top - you can switch
this around if you want.
You can make some further adjustments to this by making more dots at once (which
I think was your h parameter?), or adapting bits into whatever program
structure you want.
This code works because it makes sure it builds a whole tree first, and then
later prints it. This clear distinction makes it easier to make sure you're
getting all of it right. Your code was a little confused in this regard - your
generate function did instantiate a new BinaryTree at each call, but you
never attached any of them to each other! This is because when you do
tree = ... inside your function, all you're doing is changing what the local
name tree points to - it doesn't change the attribute of the tree from higher
up that you pass to it!
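This rebinding pitfall can be demonstrated in isolation (the class and function names here are hypothetical):

```python
class Box:
    def __init__(self):
        self.child = None

def broken_attach(child):
    # Rebinding the local name has no effect on the caller's object.
    child = Box()

def working_attach(box):
    # Mutating the object the caller passed in DOES persist.
    box.child = Box()

root = Box()
broken_attach(root.child)
print(root.child)                # None - only a local name was rebound

working_attach(root)
print(root.child is not None)    # True - the attribute was actually set
```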
Your generate also calls printtree at every level of recursion, even though printtree already walks the whole subtree by itself... that doesn't seem like what you intended.
In my program, this bit generates the random tree:
@classmethod
def generate_random_tree(cls, depth):
    if depth > 0:
        return cls(randrange(1000000),
                   cls.generate_random_tree(depth - 1),
                   cls.generate_random_tree(depth - 1))
    return None
The classmethod bit is just some object-oriented Python stuff that isn't very
important for the algorithm. The key thing to see is that it generates two new
smaller trees, and they end up being the left and right attributes of the
tree it returns. (This also works because I did a little re-write of
__init__).
This bit basically prints the tree:
def _format(self, cur_depth=0):
    """
    Format tree as string given current depth. This is a generator of
    strings which represent lines of the representation.
    """
    if self.right is not None:
        yield from self.right._format(cur_depth + 1)
    yield "{}{}".format("." * cur_depth, self.val)
    if self.left is not None:
        yield from self.left._format(cur_depth + 1)
It's a generator of lines - but you could make it directly print the tree by
changing each yield ... to print(...) and removing the yield froms. The
key things here are the cur_depth, which lets the function be aware of how
deep in the tree it currently is, so it knows how many dots to print, and the
fact that it recursively calls itself on the left and right subtrees (and you
have to check if they're None or not, of course).
(This part is what you could modify to re-introduce h).
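As a sketch of that suggestion, here is the generator turned into a direct-printing function, with a minimal stand-in for the BinaryTree class so it runs on its own:

```python
class BinaryTree:
    # Minimal stand-in matching the class above.
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def print_tree(tree, cur_depth=0):
    """Same traversal as _format, but printing lines directly."""
    if tree is None:
        return
    print_tree(tree.right, cur_depth + 1)
    print("." * cur_depth + str(tree.val))
    print_tree(tree.left, cur_depth + 1)

print_tree(BinaryTree(1, BinaryTree(2), BinaryTree(3)))
# Prints:
# .3
# 1
# .2
```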
I am learning data structure in Python and I've encountered code that I don't understand.
I have a node class
class Node():
    def __init__(self, data=None, next=None):
        self.data = data
        self.next = next
and a LinkedList class
class LinkedList():
    def __init__(self):
        self.head = Node()
    ...
    ...
    def reversePrint(self, node=None):
        if node == None:
            node = self.head
        if node.next:
            self.reversePrint(node.next)
        print(node.data)
The append, delete, and find methods are working fine, but I can't seem to understand how this reversePrint method renders the linked list's nodes in reverse order. I would expect it to print only the last node's data rather than the whole list in reverse order. The code above is what I have.
Suppose your linked list is
Node A: data="alpha"
Node B: data="beta"
Node C: data="gamma"
You call ll.reversePrint() and node gets set to the head, which is Node A.
reversePrint with node=Node A, checks for the next node (which is Node B), and calls reversePrint, passing in Node B.
-> reversePrint with node=Node B, checks for the next node (which is Node C), and calls reversePrint, passing in Node C.
-> -> reversePrint with node=Node C, checks for the next node (there isn't one).
-> -> reversePrint with node=Node C prints the node's data: "gamma".
-> reversePrint with node=Node B prints the node's data: "beta".
reversePrint with node=Node A prints the node's data: "alpha".
The arrows indicate the depth of recursion.
The magic happens in these two lines:
if node.next:
    self.reversePrint(node.next)
If the node has a successor, it'll call the same method again, all the way up to the end of the list. Then it'll unwind the call stack and, in reverse order, print the data in each node (because print(node.data) appears after the recursive call to reversePrint). If you switch the print statement and the recursive call it'll print it in forward order, which is a bit easier to grasp.
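Here is a standalone sketch of both orders, written as plain functions for brevity:

```python
class Node:
    def __init__(self, data=None, next=None):
        self.data = data
        self.next = next

def reverse_print(node):
    if node is None:
        return
    reverse_print(node.next)   # recurse to the end first...
    print(node.data)           # ...then print while unwinding: reverse order

def forward_print(node):
    if node is None:
        return
    print(node.data)           # print before recursing: forward order
    forward_print(node.next)

lst = Node("alpha", Node("beta", Node("gamma")))
reverse_print(lst)   # gamma, beta, alpha
forward_print(lst)   # alpha, beta, gamma
```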
I'm currently working on implementing a Fibonacci heap in Python for my own personal development. While writing up the object class for a circular, doubly linked-list, I ran into an issue that I wasn't sure of.
For fast membership testing of the linked-list (in order to perform operations like 'remove' and 'merge' faster), I was thinking of adding a hash-table (a python 'set' object) to my linked-list class. See my admittedly very imperfect code below for how I did this:
class Node:
    def __init__(self, value):
        self.value = value
        self.degree = 0
        self.p = None
        self.child = None
        self.mark = False
        self.next = self
        self.prev = self

    def __lt__(self, other):
        return self.value < other.value

class Linked_list:
    def __init__(self):
        self.root = None
        self.nNodes = 0
        self.members = set()

    def add_node(self, node):
        if self.root == None:
            self.root = node
        else:
            self.root.next.prev = node
            node.next = self.root.next
            self.root.next = node
            node.prev = self.root
        if node < self.root:
            self.root = node
        self.members.add(node)
        self.nNodes = len(self.members)

    def find_min(self):
        min = None
        for element in self.members:
            if min == None or element < min:
                min = element
        return min

    def remove_node(self, node):
        if node not in self.members:
            raise ValueError('node not in Linked List')
        node.prev.next, node.next.prev = node.next, node.prev
        self.members.remove(node)
        if self.root not in self.members:
            self.root = self.find_min()
        self.nNodes -= 1

    def merge_linked_list(self, LL2):
        for element in self.members & LL2.members:
            self.remove_node(element)
        self.root.prev.next = LL2.root
        LL2.root.prev.next = self.root
        self.root.prev, LL2.root.prev = LL2.root.prev, self.root.prev
        if LL2.root < self.root:
            self.root = LL2.root
        self.members = self.members | LL2.members
        self.nNodes = len(self.members)

    def print_values(self):
        print(self.root.value)
        j = self.root.next
        while j is not self.root:
            print(j.value)
            j = j.next
My question is, does the hash table take up double the amount of space that just implementing the linked list without the hash table? When I look at the Node objects in the hash table, they seem to be in the exact same memory location that they are when just independent node objects. For example, if I create a node:
In: n1 = Node(5)
In: print n1
Out: <__main__.Node instance at 0x1041aa320>
and then put this node in a set:
In: s1 = set()
In: s1.add(n1)
In: print s1
Out: <__main__.Node instance at 0x1041aa320>
which is the same memory location. So it seems like the set doesn't copy the node.
My question is, what is the space complexity for a linked list of size n with a hash-table that keeps track of elements. Is it n or 2n? Is there anything elementary wrong about using a hash table to keep track of elements.
I hope this isn't a duplicate. I tried searching for a post that answered this question, but didn't find anything satisfactory.
Check In-memory size of a Python structure and How do I determine the size of an object in Python? for complete answers on determining the size of objects.
Here are some small results on a 64-bit machine with Python 3:
>>> import sys
>>> sys.getsizeof (1)
28
>>> sys.getsizeof (set())
224
>>> sys.getsizeof (set(range(100)))
8416
The results are in bytes. This can give you a hint about how big sets are (they are pretty big).
My question is, what is the space complexity for a linked list of size n with a hash-table that keeps track of elements. Is it n or 2n? Is there anything elementary wrong about using a hash table to keep track of elements.
Complexity calculations never make a difference between n and 2n. Optimisation does. And it's commonly said that "premature optimisation is the root of all evil", to warn about potential optimisation pitfalls. So do what you think is best for the operations you need to support.
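One thing worth verifying directly: a set stores references to the very same Node objects, not copies, so the extra space for the membership set is one reference (plus hash-table overhead) per node, i.e. O(n). Note that the default hash and equality for user-defined objects are identity-based, which is exactly what makes this kind of membership test work. A quick check:

```python
class Node:
    def __init__(self, value):
        self.value = value   # default __hash__/__eq__ are identity-based

n1 = Node(5)
members = {n1}

# The set contains the very same object, not a copy:
print(next(iter(members)) is n1)   # True

# So removing by the object itself works as expected:
members.discard(n1)
print(len(members))                # 0
```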