Why is map function not effective? - python

I am not stuck or anything, but this feels weird to me. Below is the code snippet I have been working on:
from hashClass import HashTable
a = HashTable(11)
input_value = list((54,26,93,17,77,31,44,55,20))
map(lambda x: a.put(x, x), input_value)
print(a.data)
I have created my own hash table class. It has a method called put which accepts key-value pair and "HashTable.data" attribute displays all the values in the table.
It works completely fine when I use the put method directly with key-value pairs. I know the same thing can be implemented using:
for i in input_value: a.put(i, i)
But I would like to know why the map call has no effect. When I map every input_value through the put method, it should add the value to the instance, if I am not wrong. My reasoning is that even if I never use the mapped result, the act of mapping should still update the instance variable.
Below is the hash Class I have created for reference.
class HashTable(object):
    def __init__(self, size):
        self.size = size
        self.slots = self.size * [None]
        self.data = self.size * [None]
        self.values = 0
    def put(self, key, value):
        hashValue = self.hashFunction(key)
        if self.slots[hashValue] == None:
            self.slots[hashValue] = key
            self.data[hashValue] = value
            self.values += 1
        elif self.slots[hashValue] == key:
            self.data[hashValue] = value
        else:
            hashValue = self.reHash(hashValue)
            while self.slots[hashValue] != None and self.slots[hashValue] != key:
                hashValue = self.reHash(hashValue)
            self.slots[hashValue] = key
            self.data[hashValue] = value
            self.values += 1
    def reHash(self, oldValue):
        return (oldValue + 1) % self.size
    def __len__(self):
        return self.values
    def get(self, key):
        hashValue = self.hashFunction(key)
        if self.slots[hashValue] == None:
            return "No Value associated"
        elif self.slots[hashValue] == key:
            return self.data[hashValue]
    def hashFunction(self, key):
        return key % self.size

I'm going to go out on a limb here and assume you are using Python 3.
In Python 3, map is lazily evaluated: it does not actually call the function until its results are consumed. What you are trying to do is use map to produce side effects. Sure, you can force that with:
list(map(lambda x: a.put(x, x), input_value))
The list() forces evaluation to take place.
However, using map for side effects is somewhat of an anti-pattern. I would prefer something clearer and more idiomatic such as the for loop you mentioned.
As an example:
In [854]: s = set()
In [862]: m = map(lambda x: s.add(x), [1, 2, 3])
In [863]: s
Out[863]: set()
Nothing happened to s thus far. Now, apply list() to the map object.
In [864]: list(m)
Out[864]: [None, None, None]
The Nones are classic symptoms of side-effects. However...
In [865]: s
Out[865]: {1, 2, 3}
So, it works. But it certainly doesn't look good.
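The laziness is easy to observe directly. A minimal, self-contained sketch (using list.append instead of the HashTable from the question):

```python
calls = []                      # records every invocation of the mapped function
m = map(calls.append, [1, 2, 3])
before = list(calls)            # snapshot before the map object is consumed
list(m)                         # consuming the iterator triggers the calls
after = list(calls)             # now all three side effects have happened
```

Until the iterator is consumed, the function is never called at all.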

Related

A memoized function that takes a tuple of strings to return an integer?

Suppose I have arrays of tuples like so:
a = [('shape', 'rectangle'), ('fill', 'no'), ('size', 'huge')]
b = [('shape', 'rectangle'), ('fill', 'yes'), ('size', 'large')]
I am trying to turn these arrays into numerical vectors with each dimension representing a feature.
So the expected output would be something like:
amod = [1, 0, 1] # or [1, 1, 1]
bmod = [1, 1, 2] # or [1, 2, 2]
So the vector that gets created depends on what it has seen before (i.e. 'rectangle' is still coded as 1, but the new value 'large' gets coded as the next step up, 2).
I think I could use some combination of yield and a memoize function to help me with this. This is what I've tried so far:
def memoize(f):
    memo = {}
    def helper(x):
        if x not in memo:
            memo[x] = f(x)
        return memo[x]
    return helper

@memoize
def verbal_to_value(tup):
    u = 1
    if tup[0] == 'shape':
        yield u
    u += 1
    if tup[0] == 'fill':
        yield u
    u += 1
    if tup[0] == 'size':
        yield u
    u += 1
But I keep getting this error:
TypeError: 'NoneType' object is not callable
Is there a way I can create this function that has a memory of what it has seen? Bonus points if it could add keys dynamically so I don't have to hardcode things like 'shape' or 'fill'.
First off: this is my preferred implementation of the memoize
decorator, mostly because of speed ...
def memoize(f):
    class memodict(dict):
        __slots__ = ()
        def __missing__(self, key):
            self[key] = ret = f(key)
            return ret
    return memodict().__getitem__
except for a few edge cases, it has the same effect as yours:
def memoize(f):
    memo = {}
    def helper(x):
        if x not in memo:
            memo[x] = f(x)
        # else:
        #     pass
        return memo[x]
    return helper
but is somewhat faster, because the if x not in memo: check happens in native code instead of in Python. To understand it you merely need to know that, under normal circumstances, to interpret adict[key] Python calls adict.__getitem__(key); if adict doesn't contain key, __getitem__() calls adict.__missing__(key), so we can leverage Python's magic-method protocols for our gain...
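The __missing__ hook can be seen in isolation; a minimal sketch of the protocol the decorator leans on (a toy Squares class, not from the original answer):

```python
class Squares(dict):
    """A failed lookup computes the value, caches it, and returns it."""
    def __missing__(self, key):
        # the native dict machinery calls this only on a miss
        self[key] = ret = key * key
        return ret

s = Squares()
first = s[4]    # miss: __missing__ computes and stores 16
second = s[4]   # hit: plain dict lookup, __missing__ is not called again
```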
# This is the first idea I had for how I would implement your
# verbal_to_value() using memoization:
from collections import defaultdict

work = defaultdict(set)

@memoize
def verbal_to_value(kv):
    k, v = kv
    aset = work[k]   # work creates a new set, if not already created
    aset.add(v)      # add value, if not already added
    return len(aset)
including the memoize decorator, that's 15 lines of code...
# test suite:
def vectorize(alist):
    return [verbal_to_value(kv) for kv in alist]

a = [('shape', 'rectangle'), ('fill', 'no'), ('size', 'huge')]
b = [('shape', 'rectangle'), ('fill', 'yes'), ('size', 'large')]
print(vectorize(a))  # shows [1, 1, 1]
print(vectorize(b))  # shows [1, 2, 2]
defaultdict is a powerful object that has almost the same logic as memoize: a standard dictionary in every way, except that when a lookup fails, it runs its callback function to create the missing value - in our case, set().
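That callback behavior is easy to demonstrate on its own, with set as the factory:

```python
from collections import defaultdict

groups = defaultdict(set)            # a missing key triggers a set() call
groups['shape'].add('rectangle')     # 'shape' is created on first access
groups['shape'].add('rectangle')     # sets ignore duplicates
groups['fill'].add('no')
```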
Unfortunately, this problem requires either access to the tuple that is being used as the key, or to the dictionary state itself, with the result that we cannot just write a simple function for .default_factory. But we can write a new object based on the memoize/defaultdict pattern:
# This is how I would implement your verbal_to_value without
# memoization, though the worker class is so similar to memoize
# that it's easy to see why memoize is a good pattern to work from:
class sloter(dict):
    __slots__ = ()
    def __missing__(self, key):
        self[key] = ret = len(self) + 1
        # this + 1 bothers me; why can't these vectors be 0-based? ;)
        return ret

from collections import defaultdict

work2 = defaultdict(sloter)

def verbal_to_value2(kv):
    k, v = kv
    return work2[k][v]
# ~10 lines of code?
# test suite 2:
def vectorize2(alist):
    return [verbal_to_value2(kv) for kv in alist]

print(vectorize2(a))  # shows [1, 1, 1]
print(vectorize2(b))  # shows [1, 2, 2]
You might have seen something like sloter before, because it's
sometimes used for exactly this sort of situation. Converting member
names to numbers and back. Because of this, we have the advantage of
being able to reverse things like this:
def unvectorize2(a_vector, pattern=('shape', 'fill', 'size')):
    reverser = [{v: k2 for k2, v in work2[k].items()} for k in pattern]
    for index, vect in enumerate(a_vector):
        yield pattern[index], reverser[index][vect]

print(list(unvectorize2(vectorize2(a))))
print(list(unvectorize2(vectorize2(b))))
But I saw those yields in your original post, and they got me thinking... what if there were a memoize/defaultdict-like object that could take a generator instead of a function, and knew to just advance the generator rather than calling it? Then I realized... that yes, generators come with a callable called __next__(), which means we don't need a new defaultdict implementation, just a careful extraction of the correct member function...
def count(start=0):  # same as: from itertools import count
    while True:
        yield start
        start += 1

# so we can get the exact same behavior as above (except faster)
# by saying:
sloter3 = lambda: defaultdict(count(1).__next__)
# and then
work3 = defaultdict(sloter3)
# or just:
work3 = defaultdict(lambda: defaultdict(count(1).__next__))
# which yes, is a bit of a mindwarp if you've never needed to do
# that before.
# The outer defaultdict interprets the first item. Every time a new
# first item is received, the lambda is called, which creates a new
# count() generator (starting from 1) and passes its .__next__ method
# to a new inner defaultdict.

def verbal_to_value3(kv):
    k, v = kv
    return work3[k][v]
# you *could* call that 8 lines of code, but we managed to use
# defaultdict twice, and didn't need to define it, so I wouldn't call
# it 'less complex' or anything.

# test suite 3:
def vectorize3(alist):
    return [verbal_to_value3(kv) for kv in alist]

print(vectorize3(a))  # shows [1, 1, 1]
print(vectorize3(b))  # shows [1, 2, 2]
# so yes, that can also work.

# and since the internal state in `work3` is stored in the exact same
# format, it can be accessed the same way as `work2` to reconstruct
# input from output.
def unvectorize3(a_vector, pattern=('shape', 'fill', 'size')):
    reverser = [{v: k2 for k2, v in work3[k].items()} for k in pattern]
    for index, vect in enumerate(a_vector):
        yield pattern[index], reverser[index][vect]

print(list(unvectorize3(vectorize3(a))))
print(list(unvectorize3(vectorize3(b))))
Final comments:
Each of these implementations suffers from storing state in a global variable, which I find anti-aesthetic; but depending on what you're planning to do with that vector later, that might be a feature, as I demonstrated.
Edit:
After another day of meditating on this, and on the sorts of situations where I might need it, I think I'd encapsulate this feature like this:
from collections import defaultdict
from itertools import count

class slotter4:
    def __init__(self):
        # keep track of what order we expect to see keys
        self.pattern = defaultdict(count(1).__next__)
        # keep track of what values we've seen and what number we've
        # assigned to mean them
        self.work = defaultdict(lambda: defaultdict(count(1).__next__))
    def slot(self, kv, i=False):
        """used to be named verbal_to_value"""
        k, v = kv
        if i and i != self.pattern[k]:  # keep track of the order we saw initial keys
            raise ValueError("Input fields out of order")
            # in theory we could ignore this error, and just know that
            # we're going to default to the field order we saw first.
            # Or we could just not keep track, which might be required
            # if our code runs too slow, but then we cannot make
            # pattern optional in .unvectorize()
        return self.work[k][v]
    def vectorize(self, alist):
        return [self.slot(kv, i) for i, kv in enumerate(alist, 1)]
        # if we're not keeping track of the field pattern, we could do this instead:
        # return [self.work[k][v] for k, v in alist]
    def unvectorize(self, a_vector, pattern=None):
        if pattern is None:
            pattern = [k for k, v in sorted(self.pattern.items(), key=lambda a: a[1])]
        reverser = [{v: k2 for k2, v in self.work[k].items()} for k in pattern]
        return [(pattern[index], reverser[index][vect])
                for index, vect in enumerate(a_vector)]

# test suite 4:
s = slotter4()
if __name__ == '__main__':
    Av = s.vectorize(a)
    Bv = s.vectorize(b)
    print(Av)  # shows [1, 1, 1]
    print(Bv)  # shows [1, 2, 2]
    print(s.unvectorize(Av))  # shows a
    print(s.unvectorize(Bv))  # shows b
else:
    # run the test silently, and only complain if something has broken
    assert s.unvectorize(s.vectorize(a)) == a
    assert s.unvectorize(s.vectorize(b)) == b
Good luck out there!
Not the best approach, but it may help you figure out a better solution:
class Shape:
    counter = {}
    def to_tuple(self, tuples):
        self.tuples = tuples
        self._add()
        l = []
        for i, v in self.tuples:
            l.append(self.counter[i][v])
        return l
    def _add(self):
        for i, v in self.tuples:
            if i in self.counter.keys():
                if v not in self.counter[i]:
                    self.counter[i][v] = max(self.counter[i].values()) + 1
            else:
                self.counter[i] = {v: 0}

a = [('shape', 'rectangle'), ('fill', 'no'), ('size', 'huge')]
b = [('shape', 'rectangle'), ('fill', 'yes'), ('size', 'large')]
s = Shape()
s.to_tuple(a)
s.to_tuple(b)

Nested dict keys as variable

There must be a more graceful way of doing this but I cannot figure out how to create a single function for reading/writing values to different levels of a dict, this is the 'best' that I could come up with:
table = {
    'A': {
        'B': '2',
        'C': {
            'D': '3'
        }
    }
}
first = 'A'
second1 = 'B'
second2 = 'C'
third = 'D'

def oneLevelDict(first):
    x = table[first]
    print(x)

def twoLevelDict(first, second):
    x = table[first][second]
    print(x)

def threeLevelDict(first, second, third):
    x = table[first][second][third]
    print(x)

oneLevelDict(first)
twoLevelDict(first, second1)
threeLevelDict(first, second2, third)
You can use *args to pass an arbitrary number of arguments to a function. You can then use a loop to traverse the levels.
def get_any_level(*keys):
    d = table
    for key in keys:
        d = d[key]
    return d
Now you have one function that can replace the three you had before:
print(get_any_level(first))
print(get_any_level(first, second1))
print(get_any_level(first, second2, third))
You can use this function to write to an arbitrary level as well:
get_any_level(first)[second1] = 17
A better way might be to have a separate function to write though:
def put_any_level(value, *keys):
    get_any_level(*keys[:-1])[keys[-1]] = value

put_any_level(17, first, second1)
value has to come first in the argument list unless you want it to be keyword-only because *keys will consume all positional arguments. This is not necessarily a bad alternative:
def put_any_level(*keys, value):
    get_any_level(*keys[:-1])[keys[-1]] = value
The keyword argument adds clarity:
put_any_level(first, second1, value=17)
But it will also lead to an error if you attempt to pass it as a positional argument, e.g. put_any_level(first, second1, 17).
Couple of minor points:
It's conventional to use CamelCase only for class names. Variables and functions are conventionally written in lowercase_with_underscores.
A function should generally do one thing, and do it well. In this case, I've split the task of finding the nested value from the task of displaying it by giving the function a return value.
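If you like folding better than an explicit loop, the same traversal can also be written with functools.reduce (a sketch using the nested table from the question; behavior is identical to the loop version):

```python
from functools import reduce

table = {'A': {'B': '2', 'C': {'D': '3'}}}

def get_any_level(*keys, table=table):
    # fold each key into a successive lookup: table[k1][k2]...[kn]
    return reduce(lambda d, key: d[key], keys, table)

value = get_any_level('A', 'C', 'D')
```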
This can be achieved using *args. Read more about it here
And this is how to do it:
def allLevelDict(*argv):
    if len(argv) == 1:
        x = table[argv[0]]
        print(x)
    elif len(argv) == 2:
        x = table[argv[0]][argv[1]]
        print(x)
    elif len(argv) == 3:
        x = table[argv[0]][argv[1]][argv[2]]
        print(x)
allLevelDict(first)
allLevelDict(first, second1)
allLevelDict(first, second2, third)
Similar to the other suggestions, but perhaps even more graceful, if you like recursion:
table = {'A':{'B':'2','C':{'D':'3'}}}
first = 'A'
second1 = 'B'
second2 = 'C'
third = 'D'
def get_from(x, *keys):
    return get_from(x[keys[0]], *keys[1:]) if len(keys) > 0 else x
print(get_from(table, first))
print(get_from(table, first, second1))
print(get_from(table, first, second2, third))
Note: I'm also passing in the table, since I imagine you'd want to be able to use it on other dictionaries also.
Or, if you think shorter isn't always better:
def get_from(x, *keys):
    if len(keys) > 0:
        return get_from(x[keys[0]], *keys[1:])
    else:
        return x
Normally, recursion can be dangerous, as it's expensive - but since you're unlikely to have incredibly deep dictionaries, I feel it is the right solution here.

Python 3.x: How to compare two lists containing dictionaries where order doesn't matter

I have nested dictionaries that may contain other dictionaries or lists. I need to be able to compare a list (or set, really) of these dictionaries to show that they are equal.
The order of the lists is not uniform. Typically I would turn the list into a set, but that is not possible since some of the values are themselves dictionaries.
a = {'color': 'red'}
b = {'shape': 'triangle'}
c = {'children': [{'color': 'red'}, {'age': 8},]}
test_a = [a, b, c]
test_b = [b, c, a]
print(test_a == test_b) # False
print(set(test_a) == set(test_b)) # TypeError: unhashable type: 'dict'
Is there a good way to approach this to show that test_a has the same contents as test_b?
You can use a simple loop to check that each element of one list is in the other:
def areEqual(a, b):
    if len(a) != len(b):
        return False
    for d in a:
        if d not in b:
            return False
    return True
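One caveat worth flagging (my observation, not part of the original answer): because in only tests membership, lists that differ only in how often an element repeats, such as [a, a, b] and [a, b, b], would compare equal. Removing each match from a copy handles duplicates:

```python
def are_equal(a, b):
    # Treat both lists as multisets of (possibly unhashable) items.
    remaining = list(b)           # copy, so the caller's list is untouched
    if len(a) != len(remaining):
        return False
    for d in a:
        try:
            remaining.remove(d)   # remove() compares with ==, so dicts work
        except ValueError:
            return False          # d occurs more often in a than in b
    return True
```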
I suggest writing a function that turns any Python object into something orderable, with its contents, if it has any, in sorted order. If we call it canonicalize, we can compare nested objects with:
canonicalize(test_a) == canonicalize(test_b)
Here's my attempt at writing a canonicalize function:
import collections.abc

def canonicalize(x):
    if isinstance(x, dict):
        x = sorted((canonicalize(k), canonicalize(v)) for k, v in x.items())
    elif isinstance(x, collections.abc.Iterable) and not isinstance(x, str):
        x = sorted(map(canonicalize, x))
    else:
        try:
            bool(x < x)  # test for unorderable types like complex
        except TypeError:
            x = repr(x)  # replace with something orderable
    return x
This should work for most Python objects. It won't work for lists of heterogeneous items, containers that contain themselves (which will cause the function to hit the recursion limit), nor float('nan') (which has bizarre comparison behavior, and so may mess up the sorting of any container it's in).
It's possible that this code will do the wrong thing for non-iterable, unorderable objects, if they don't have a repr function that describes all the data that makes up their value (e.g. what is tested by ==). I picked repr as it will work on any kind of object and might get it right (it works for complex, for example). It should also work as desired for classes that have a repr that looks like a constructor call. For classes that have inherited object.__repr__ and so have repr output like <Foo object at 0xXXXXXXXX> it at least won't crash, though the objects will be compared by identity rather than value. I don't think there's any truly universal solution, and you can add some special cases for classes you expect to find in your data if they don't work with repr.
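Applied to the data from the question, the idea looks like this (a condensed sketch handling only dicts, lists, and atoms, which is all the example needs):

```python
def canonicalize(x):
    # Condensed variant of the idea above: dicts become sorted item
    # pairs, lists become sorted lists; atoms are returned as-is.
    if isinstance(x, dict):
        return sorted((k, canonicalize(v)) for k, v in x.items())
    if isinstance(x, list):
        return sorted(canonicalize(v) for v in x)
    return x

a = {'color': 'red'}
b = {'shape': 'triangle'}
c = {'children': [{'color': 'red'}, {'age': 8}]}
same = canonicalize([a, b, c]) == canonicalize([b, c, a])
```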
If the elements in both lists are shallow, the idea of sorting them and then comparing with equality can work. The problem with @Alex's solution is that he is only using "id" - but if, instead of id, one uses a function that will sort dictionaries properly, things should just work:
def sortkey(element):
    if isinstance(element, dict):
        element = sorted(element.items())
    return repr(element)

sorted(test_a, key=sortkey) == sorted(test_b, key=sortkey)
(I wrap the key in repr because it casts all elements to strings before comparison, which avoids a TypeError if different elements are of unorderable types - which would almost certainly happen if you are using Python 3.x.)
Just to be clear: if your dictionaries and lists have nested dictionaries themselves, you should use the answer by @m_callens. If your inner lists are also unordered, you can fix this to work by just sorting them inside the key function as well.
In this case they are the same dicts so you can compare ids (docs). Note that if you introduced a new dict whose values were identical it would still be treated differently. I.e. d = {'color': 'red'} would be treated as not equal to a.
sorted(map(id, test_a)) == sorted(map(id, test_b))
As @jsbueno points out, you can do this with the kwarg key.
sorted(test_a, key=id) == sorted(test_b, key=id)
An elegant and relatively fast solution:
class QuasiUnorderedList(list):
    def __eq__(self, other):
        """This method isn't as inefficient as you think! It runs in
        O(1 + 2 + 3 + ... + n) time, possibly better than recursively
        freezing/checking all the elements."""
        for item in self:
            for otheritem in other:
                if otheritem == item:
                    break
            else:
                # no break was reached, item not found
                return False
        return True
This runs in O(1 + 2 + 3 + ... + n) flat. While slow for dictionaries of low depth, this is faster for dictionaries of high depth.
Here's a considerably longer snippet which is faster for dictionaries where depth is low and length is high.
class FrozenDict(collections.Mapping, collections.Hashable):  # collections.Hashable = portability
    """Adapted from http://stackoverflow.com/a/2704866/1459669"""
    def __init__(self, *args, **kwargs):
        self._d = dict(*args, **kwargs)
        self._hash = None
    def __iter__(self):
        return iter(self._d)
    def __len__(self):
        return len(self._d)
    def __getitem__(self, key):
        return self._d[key]
    def __hash__(self):
        # It would have been simpler and maybe more obvious to
        # use hash(tuple(sorted(self._d.iteritems()))) from this discussion
        # so far, but this solution is O(n). I don't know what kind of
        # n we are going to run into, but sometimes it's hard to resist the
        # urge to optimize when it will gain improved algorithmic performance.
        # Now thread safe by CrazyPython
        if self._hash is None:
            _hash = 0
            for pair in self.iteritems():
                _hash ^= hash(pair)
            self._hash = _hash
        return self._hash

def freeze(obj):
    if type(obj) in (str, int, ...):  # other immutable atoms you store in your data structure
        return obj
    elif issubclass(type(obj), list):  # ugly but needed
        return set(freeze(item) for item in obj)
    elif issubclass(type(obj), dict):  # for defaultdict, etc.
        return FrozenDict({key: freeze(value) for key, value in obj.items()})
    else:
        raise NotImplementedError("freeze() doesn't know how to freeze " + type(obj).__name__ + " objects!")

class FreezableList(list, collections.Hashable):
    _stored_freeze = None
    _hashed_self = None
    def __eq__(self, other):
        if self._stored_freeze and (self._hashed_self == self):
            frozen = self._stored_freeze
        else:
            frozen = freeze(self)
        if frozen is not self._stored_freeze:
            self._stored_freeze = frozen
        return frozen == freeze(other)
    def __hash__(self):
        if self._stored_freeze and (self._hashed_self == self):
            frozen = self._stored_freeze
        else:
            frozen = freeze(self)
        if frozen is not self._stored_freeze:
            self._stored_freeze = frozen
        return hash(frozen)

class UncachedFreezableList(list, collections.Hashable):
    def __eq__(self, other):
        """No-caching version of __eq__. May be faster.
        Don't forget to get rid of the declarations at the top of the class!
        Considerably more elegant."""
        return freeze(self) == freeze(other)
    def __hash__(self):
        """No-caching version of __hash__. See the notes in the docstring of __eq__."""
        return hash(freeze(self))
Test all three (QuasiUnorderedList, FreezableList, and UncachedFreezableList) and see which one is faster in your situation. I'll betcha it's faster than the other solutions.

Inversing a list recursively

I created a function that reverses a list recursively, but it uses a global list in which it puts the elements.
Can this be rewritten so that it won't use an outside variable/list to achieve the same result?
Here is the code:
invs = []

def inv_list(list_, elem):
    global invs
    if elem is not None:
        invs.append(elem)
    if not list_:
        return invs
    else:
        try:
            el = list_.pop()
            inv_list(list_, el)
        except Exception:
            pass
What about:
def inv_list(lst):
    if not lst:
        return []
    return inv_list(lst[1:]) + lst[:1]
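A quick usage check (the function reproduced so the snippet stands alone); note that this version builds new lists with slicing, so the input is left intact:

```python
def inv_list(lst):
    # reverse by recursion: reverse the tail, then append the head
    if not lst:
        return []
    return inv_list(lst[1:]) + lst[:1]

original = [1, 2, 3, 4]
result = inv_list(original)
```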
It looks like you are doing a whole lot more work than you need to:
def reverse_recurse(a_list):
    if not a_list:
        return []
    return [a_list.pop()] + reverse_recurse(a_list)
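One side effect worth noting (my observation): because pop() mutates, this version empties the input list as it builds the result. Reproduced so the snippet stands alone:

```python
def reverse_recurse(a_list):
    if not a_list:
        return []
    # pop() removes the last element, so the input is consumed in place
    return [a_list.pop()] + reverse_recurse(a_list)

data = [1, 2, 3]
result = reverse_recurse(data)
```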
While your implementation could be improved in various ways, what I do when I want to build something recursive without using globals, and without making the interface feel dirty, is create a nested helper function:
def inv_list(list_):
    invs = []
    def helper(elem):
        if elem is not None:
            invs.append(elem)
        if not list_:
            return invs
        else:
            try:
                el = list_.pop()
                return helper(el)
            except Exception:
                pass
    return helper(None)
That way, you can have values that are at the scope of the outer function.
The problematic way to do it is simple, just use default arguments.
def rec_reverse(input=[], output=[]):
    if len(input) == 0:
        return
    else:
        output.append(input.pop())
        rec_reverse(input, output)
        return output

x = list(range(10))
y = list(range(20))
print(rec_reverse(x, []))
print(rec_reverse(y, []))
Just remember to pass a new list to the output, so that you can call it again without getting old values.
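The reason a fresh list must be passed in is that Python evaluates default argument values once, at function definition time. A minimal sketch of the pitfall:

```python
def append_default(item, acc=[]):   # the [] is created once, when def runs
    acc.append(item)
    return acc

first = append_default(1)
second = append_default(2)          # same list object: state leaks across calls
```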
Nevertheless, you can use the safe approach without using default arguments:
def rec_reverse(input):
    if not input:
        return input
    else:
        return [input.pop()] + rec_reverse(input)
And you can also use its recursive equivalent as a lambda expression:
rec_reverse = lambda input=[]: [] if not input else [input.pop(), ] + rec_reverse(input)
Keep in mind though, that there's an even simpler solution without using recursion at all:
x = list(range(10))
rec_reverse = lambda input: input[::-1]
print(rec_reverse(x))
Since in Python, you can reverse any list using extended slice notation.
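For reference, the slice and the built-in reversed() side by side; both produce a new list and leave the original alone:

```python
x = [1, 2, 3]
by_slice = x[::-1]               # extended slice: new reversed list
by_builtin = list(reversed(x))   # reversed() returns an iterator
```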
Also, you can just use reverse() and spare yourself the trouble.
def reverse(input):
    input.reverse()
    return input
Building on Rederick Deathwill, here is a simplified version of your function:
def inv_list(list_):
    def inner(list_, invs):
        if not list_:
            return invs
        else:
            invs.append(list_.pop())
            return inner(list_, invs)
    return inner(list_, [])
It passes invs along as an argument, getting rid of the need for a global variable to hold the inverted list. With each subsequent invocation, invs is passed along so that the next call can build on it.
Once the bottom of the call stack is reached, the function returns the reversed list. A nice addition to the original is the return inner(list_, invs) line, which allows the caller to capture the new list as the return value.
This is not the shortest, but I think it is at least readable.

Adding to Linked List

I've tried to write a function that takes three parameters: a linked list, a value, and a new value. The purpose of the function is to add the new value after the value in the linked list. Here's my function.
def addAfter(lis, value, newValue):
    tracker = lis
    while tracker != None:
        if tracker['data'] == value:
            newNode = {'data': newValue, 'next': tracker['next']}
            tracker['next'] = newNode
            break
        else:
            tracker = tracker['next']
For some reason I can't get this function to do anything. It doesn't change the list. I was wondering if someone could tell me what I'm doing wrong.
The problem is probably in how you're defining your initial nodes or list. For example, the following code works in Python3.4
def addAfter(lis, value, newValue):
    tracker = lis
    while tracker != None:
        if tracker['data'] == value:
            newNode = {'data': newValue, 'next': tracker['next']}
            tracker['next'] = newNode
            break
        else:
            tracker = tracker['next']

node1 = {'data': 3, 'next': None}
addAfter(node1, 3, 4)
print(node1)
print(node1['next'])
This outputs
{'next': {'next': None, 'data': 4}, 'data': 3}
{'next': None, 'data': 4}
As we'd expect. So there are several possibilities here:
You're not actually using dictionaries - you defined a custom class and overloaded __setitem__ and __getitem__. I highly doubt this.
You overloaded __eq__ or __hash__ so that the comparisons aren't doing what you expect. I doubt this too.
You're using non-built-in objects as values and didn't overload __eq__, so when you compare Foo() to Foo() it evaluates False. This seems somewhat likely. You might think that two objects are equal, when in fact Python is checking whether they lie at the same memory location, not whether their fields match.
Your test code is wrong - you're printing out a copy of the data, not the original data. This sort of thing is always a possibility.
Try testing for each of those and let us know if none pan out. If not, provide a little more context, i.e., a scenario where the break is evident.
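The third possibility is easy to reproduce (a sketch with a hypothetical Foo class, not from the question):

```python
class Foo:
    """No __eq__ defined, so == falls back to identity comparison."""
    def __init__(self, data):
        self.data = data

x = Foo(3)
y = Foo(3)
looks_equal = (x == y)   # False: distinct objects, even with matching fields
same_object = (x == x)   # True: identical memory location
```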
Could you add a new class such as:
class ListNode:
    def __init__(self, x):
        self.val = x
        self.next = None
and then use the code of en_Knight? Otherwise the code will be hard to read.
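With such a class, the dict-based function translates directly; a sketch following en_Knight's logic with attributes instead of keys:

```python
class ListNode:
    def __init__(self, x):
        self.val = x
        self.next = None

def add_after(head, value, new_value):
    """Insert new_value right after the first node whose val equals value."""
    tracker = head
    while tracker is not None:
        if tracker.val == value:
            node = ListNode(new_value)
            node.next = tracker.next   # splice the new node into the chain
            tracker.next = node
            break
        tracker = tracker.next

head = ListNode(3)
add_after(head, 3, 4)
```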
