Python merge sort within a class

I am trying to make merge sort work within a class called Sorter for a Python project, using the code below. The problem is that whenever I run it, it raises the error "name '_merge_sort' is not defined", and if I remove the _merge_sort calls and just use "left = lst[:mid]", it only cuts the list in half and reorganizes that half, but never merges the whole list back together. Is there a way to get around this issue? Thanks!!
from sorter import Sorter
unsorted_list = [5, -3, 4, 10, -14, 2, 4, -5]
my_sorter = Sorter()
my_sorter.unsorted_tuple = tuple(unsorted_list)
sorted_list = my_sorter._merge_sort()
print(sorted_list)
My code:
class Sorter():
    def __init__(self):
        self.unsorted_tuple = tuple([])
        self.algorithm = 'default'

    def generate_new_tuple(self, n):
        new_list = []
        for x in range(0, n):
            new_list.append(random.randint(0, maxsize))
        tuple(new_list)
        self.unsorted_tuple = new_list
        return None

    def _merge_sort(self, reverse=False):
        lst = list(self.unsorted_tuple)
        result = []
        i, j = 0, 0
        if(len(lst) <= 1):
            return lst
        mid = int(len(lst)/2)
        left = _merge_sort(lst[:mid])
        right = _merge_sort(lst[mid:])
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                result.append(left[i])
                i += 1
            else:
                result.append(right[j])
                j += 1
        result += left[i:]
        result += right[j:]
        return result

You're confused about how classes and methods work.
The interpreter is correct (by definition ...): there is no free function _merge_sort. That name refers to a method, and it must be called on a Sorter object. You have gone to a lot of trouble to set up this class, but then you've ignored that encapsulation when you try to recurse on the halves of your list:
left = _merge_sort(lst[:mid])
right = _merge_sort(lst[mid:])
You're trying to invoke your method as if it were a plain function. Instead, you have to instantiate a Sorter object for each half of the list, set its unsorted_tuple attribute to that half-list, and then invoke the method on each of those. Something like:
left_half = Sorter()
left_half.unsorted_tuple = lst[:mid]
left = left_half._merge_sort()
right_half = Sorter()
right_half.unsorted_tuple = lst[mid:]
right = right_half._merge_sort()
Please consider the "weight" of this code; perhaps you can reconfigure your class to better support your needs. For starters, give __init__ another parameter for the initial value of the list/tuple.
Does that get you moving?
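For instance, here is a minimal sketch of that refactor (my own illustration, not the original class: the data parameter and the _sort/_merge helpers are added names). __init__ accepts the initial values, and the recursion happens on plain lists inside the class, so no extra Sorter objects are needed:
class Sorter:
    def __init__(self, data=()):
        self.unsorted_tuple = tuple(data)

    def _merge_sort(self):
        # public entry point: unpack the tuple once, then recurse on plain lists
        return self._sort(list(self.unsorted_tuple))

    def _sort(self, lst):
        if len(lst) <= 1:
            return lst
        mid = len(lst) // 2
        return self._merge(self._sort(lst[:mid]), self._sort(lst[mid:]))

    def _merge(self, left, right):
        result, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                result.append(left[i])
                i += 1
            else:
                result.append(right[j])
                j += 1
        return result + left[i:] + right[j:]

print(Sorter([5, -3, 4, 10, -14, 2, 4, -5])._merge_sort())
# [-14, -5, -3, 2, 4, 4, 5, 10]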

Related

(Python) How to override list's methods?

Well, I want to add a method to list.
So I made a new child class like this:
class list(list):
    def findAll(self, position):
        data = []
        for i in range(len(self)):
            if(self[i] == position):
                data.append(i)
        return data
k = list()
k.append(1)
k.append(2)
k.append(3)
k.append(4)
print(k.findAll(10))
But I want to be able to write code like this:
class list(list):
    def findAll(self, position):
        data = []
        for i in range(len(self)):
            if(self[i] == position):
                data.append(i)
        return data

k = [10,1,2,3,4,5,10,10,10,10,10]  # when I make the list, I want to use '[' and ']'
print(k.findAll(10))  # this raises AttributeError: 'list' object has no attribute 'findAll'
How can I make this work?
When I make the list, I want to use '[' and ']'.
I tried this code:
class list(list):
    def findAll(self, position):
        data = []
        for i in range(len(self)):
            if(self[i] == position):
                data.append(i)
        return data
k = [10,1,2,3,4,5,10,10,10,10,10]
k = list(k)
print(k.findAll(10))
Usually a child class shouldn't have the same name as its parent, especially when the parent is a standard class; it can lead to lots of confusion down the road.
You could use the same name, but the subclass should then live in its own module/package, so that wherever it's used you can refer to it as package.class and be sure it isn't confused with the built-in.
Another thing here: when you want to use your list class, you need to instantiate it.
With k = [10,1,2,3,4,5,10,10,10,10,10] you instantiate the standard list; literal brackets always build the built-in type, so the result has no findAll. And k = list(k) is confusing precisely because you reused the name instead of writing package.class to make the distinction, and because your class defines no conversion helpers of its own.
The answer already given by the other user should be OK, but I wrote this so you understand what is what and why.
You can't override a built-in type's methods in place.
You can create a new class that "extends" the class list (inheritance):
class ExtendedList(list):
    def find_all(self, num):
        return [i for i in range(len(self)) if self[i] == num]
k = ExtendedList([1, 2, 3, 3, 3, 4, 3])
print(k.find_all(3))
# [2, 3, 4, 6]
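As a side note on the '[' and ']' wish: a bracket literal always produces the built-in list, so the subclass method only becomes available after wrapping the literal. A small sketch reusing the ExtendedList above:
k = [10, 1, 2, 3, 4, 5, 10]   # built-in list: no find_all here
k = ExtendedList(k)           # wrap it to gain the extra method
print(k.find_all(10))         # [0, 6]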

Example of how to properly define a class in Python

I have defined a function as such:
def quicksort(points):
    if len(points) < 2: return points
    smaller, equal, larger = [], [], []
    pivot_angle = find_polar_angle(random.randint(0, len(points) - 1))
    for pt in points:
        pt_angle = find_polar_angle(pt)
        if pt_angle < pivot_angle:
            smaller.append(pt)
        elif pt_angle == pivot_angle:
            equal.append(pt)
        else:
            larger.append(pt)
    return quicksort(smaller) + sorted(equal, key=find_dist) + quicksort(larger)
Now, I want to change my code, which by the way is an implementation of the Graham scan algorithm, into object-oriented code. So I went ahead and declared a class in a file MyClasses.py:
from MyFunctions import find_anchor, find_polar_angle, find_dist, find_det, quicksort, graham_scan
class Cluster:
    def __init__(self):
        self.members = []
        self.hull = []
        self.anchor = None
        self.find_anchor = find_anchor
        self.find_polar_angle = find_polar_angle
        self.find_dist = find_dist
        self.find_det = find_det
        self.quicksort = quicksort
        self.graham_scan = graham_scan
But of course I have to change my functions as well. I don't want to list all of them here, so I'll stick with the quicksort function as an example. This is where I struggle a lot, since I don't know the Python syntax well enough to be sure about what I am doing here. This is my revised form of quicksort:
def quicksort(self, points):
    if len(points) < 2: return points
    smaller, equal, larger = [], [], []
    pivot_angle = self.find_polar_angle(self, random.randint(0, len(self.members) - 1))
    for pt in points:
        pt_angle = self.find_polar_angle(self, pt)
        if pt_angle < pivot_angle:
            smaller.append(pt)
        elif pt_angle == pivot_angle:
            equal.append(pt)
        else:
            larger.append(pt)
    return self.quicksort(self, smaller) + sorted(self, equal, key=self.find_dist) + self.quicksort(self, larger)
Here's the thing: This function is recursive! So I need it to take smaller, equal and larger as arguments. Later, another function graham_scan is going to call this function as such:
self.members = self.quicksort(self, self.members)
I know there are probably many mistakes in here. That's why I'm asking: Is this last expression a valid expression? I mean I am changing a class-variable (self.members) but I do so not by directly changing it, but by assigning it the return value of quicksort.
Anyways, help is very much appreciated!
To attach an existing function as an attribute of a new class, try this:
def quicksort():
    pass  # your custom logic

class Cluster:
    def __init__(self):
        self.members = []
        self.quicksort = quicksort
        # more properties
Python has quite a different syntax from C++ or Java.
As for the second question: all the variables used in the quicksort function body are local and only available inside that function.
About the second question: all members of classes are PUBLIC in Python. By convention you can add "_" and "__" in front of names for protected and private respectively, BUT this does NOT prevent you from accessing them; it just means that you (or whoever is reading the code) should not misuse them.
However, a __variable must be accessed with the following (name-mangled) syntax outside the class:
class Square:
    def __init__(self, x):
        self.__x = x

    def get_surface(self):
        return self.__x ** 2

>>> square1 = Square(5)
>>> print(square1.get_surface())
25
>>> square1._Square__x = 10
>>> print(square1.get_surface())
100
>>> square1.__x
AttributeError: 'Square' object has no attribute '__x'
Or it will raise AttributeError. Hope this helps
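Coming back to the original quicksort question: a minimal sketch of the more conventional layout, where quicksort is defined as a method inside Cluster so Python binds self automatically and no explicit self argument is ever passed. Here find_polar_angle is assumed to be another method of the class (a stand-in below), and the find_dist tie-break is left out for brevity:
import random

class Cluster:
    def __init__(self, members=None):
        self.members = list(members or [])

    def find_polar_angle(self, pt):
        # stand-in: the real implementation would come from MyFunctions
        raise NotImplementedError

    def quicksort(self, points):
        if len(points) < 2:
            return points
        smaller, equal, larger = [], [], []
        pivot_angle = self.find_polar_angle(random.choice(points))
        for pt in points:
            pt_angle = self.find_polar_angle(pt)
            if pt_angle < pivot_angle:
                smaller.append(pt)
            elif pt_angle == pivot_angle:
                equal.append(pt)
            else:
                larger.append(pt)
        # recursive calls use normal method syntax, so there is no explicit self
        return self.quicksort(smaller) + equal + self.quicksort(larger)

    def graham_scan(self):
        # assigning the return value back to the attribute is perfectly valid
        self.members = self.quicksort(self.members)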

A memoized function that takes a tuple of strings to return an integer?

Suppose I have arrays of tuples like so:
a = [('shape', 'rectangle'), ('fill', 'no'), ('size', 'huge')]
b = [('shape', 'rectangle'), ('fill', 'yes'), ('size', 'large')]
I am trying to turn these arrays into numerical vectors with each dimension representing a feature.
So the expected output would be something like:
amod = [1, 0, 1] # or [1, 1, 1]
bmod = [1, 1, 2] # or [1, 2, 2]
So the vector that gets created depends on what it has seen before (i.e. rectangle is still coded as 1, but the new value 'large' gets coded as the next step up, 2).
I think I could use some combination of yield and a memoize function to help me with this. This is what I've tried so far:
def memoize(f):
    memo = {}
    def helper(x):
        if x not in memo:
            memo[x] = f(x)
        return memo[x]
    return helper

@memoize
def verbal_to_value(tup):
    u = 1
    if tup[0] == 'shape':
        yield u
    u += 1
    if tup[0] == 'fill':
        yield u
    u += 1
    if tup[0] == 'size':
        yield u
    u += 1
But I keep getting this error:
TypeError: 'NoneType' object is not callable
Is there a way I can create this function that has a memory of what it has seen? Bonus points if it could add keys dynamically so I don't have to hardcode things like 'shape' or 'fill'.
First off: this is my preferred implementation of the memoize decorator, mostly because of speed ...
def memoize(f):
    class memodict(dict):
        __slots__ = ()
        def __missing__(self, key):
            self[key] = ret = f(key)
            return ret
    return memodict().__getitem__
except for a few edge cases it has the same effect as yours:
def memoize(f):
    memo = {}
    def helper(x):
        if x not in memo:
            memo[x] = f(x)
        #else:
        #    pass
        return memo[x]
    return helper
but it is somewhat faster because the if x not in memo: check happens in native code instead of in Python. To understand it you merely need to know that, under normal circumstances, to interpret adict[key] Python calls adict.__getitem__(key); if adict doesn't contain key, __getitem__() calls adict.__missing__(key), so we can leverage the Python magic-method protocol for our gain...
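As a quick standalone illustration of that protocol (my own toy example, separate from the answer's code):
class Demo(dict):
    def __missing__(self, key):
        # dict.__getitem__ calls this only when the key is absent
        value = self[key] = key * 2
        return value

d = Demo()
print(d[3])  # 6, computed via __missing__ and cached
print(d[3])  # 6, now a plain dict lookup; __missing__ is not called again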
# This is the first idea I had of how I would implement your
# verbal_to_value() using memoization:
from collections import defaultdict

work = defaultdict(set)

@memoize
def verbal_to_value(kv):
    k, v = kv
    aset = work[k]   # work creates a new set, if not already created
    aset.add(v)      # add value if not already added
    return len(aset)
including the memoize decorator, that's 15 lines of code...
# test suite:
def vectorize(alist):
    return [verbal_to_value(kv) for kv in alist]

a = [('shape', 'rectangle'), ('fill', 'no'), ('size', 'huge')]
b = [('shape', 'rectangle'), ('fill', 'yes'), ('size', 'large')]

print(vectorize(a))  # shows [1,1,1]
print(vectorize(b))  # shows [1,2,2]
defaultdict is a powerful object with almost the same logic as memoize: a standard dictionary in every way, except that when a lookup fails, it runs the callback function to create the missing value; in our case, set().
Unfortunately this problem requires access either to the tuple that is being used as the key, or to the dictionary state itself, with the result that we cannot just write a simple function for .default_factory.
But we can write a new object based on the memoize/defaultdict pattern:
# This is how I would implement your verbal_to_value without
# memoization, though the worker class is so similar to @memoize
# that it's easy to see why memoize is a good pattern to work from:
class sloter(dict):
    __slots__ = ()
    def __missing__(self, key):
        self[key] = ret = len(self) + 1
        # this + 1 bothers me, why can't these vectors be 0 based? ;)
        return ret

from collections import defaultdict

work2 = defaultdict(sloter)

def verbal_to_value2(kv):
    k, v = kv
    return work2[k][v]
# ~10 lines of code?
# test suite 2:
def vectorize2(alist):
    return [verbal_to_value2(kv) for kv in alist]

print(vectorize2(a))  # shows [1,1,1]
print(vectorize2(b))  # shows [1,2,2]
You might have seen something like sloter before, because it's sometimes used for exactly this sort of situation: converting member names to numbers and back. Because of this, we have the advantage of being able to reverse things like this:
def unvectorize2(a_vector, pattern=('shape','fill','size')):
    reverser = [{v: k2 for k2, v in work2[k].items()} for k in pattern]
    for index, vect in enumerate(a_vector):
        yield pattern[index], reverser[index][vect]
print (list(unvectorize2(vectorize2(a))))
print (list(unvectorize2(vectorize2(b))))
But I saw those yields in your original post, and they got me thinking... what if there was a memoize/defaultdict-like object that could take a generator instead of a function and knew to just advance the generator rather than calling it? Then I realized... that yes, generators come with a callable called __next__(), which meant that we didn't need a new defaultdict implementation, just a careful extraction of the correct member function...
def count(start=0):  # same as: from itertools import count
    while True:
        yield start
        start += 1

# so we could get the exact same behavior as above (except faster)
# by saying:
sloter3 = lambda: defaultdict(count(1).__next__)
# and then
work3 = defaultdict(sloter3)
# or just:
work3 = defaultdict(lambda: defaultdict(count(1).__next__))
# which yes, is a bit of a mindwarp if you've never needed to do that
# before.
# The outer defaultdict interprets the first item. Every time a new
# first item is received, the lambda is called, which creates a new
# count() generator (starting from 1), and passes its .__next__ method
# to a new inner defaultdict.

def verbal_to_value3(kv):
    k, v = kv
    return work3[k][v]
# you *could* call that 8 lines of code, but we managed to use
# defaultdict twice, and didn't need to define it, so I wouldn't call
# it 'less complex' or anything.
# test suite 3:
def vectorize3(alist):
    return [verbal_to_value3(kv) for kv in alist]

print(vectorize3(a))  # shows [1,1,1]
print(vectorize3(b))  # shows [1,2,2]
# so yes, that can also work.
# and since the internal state in `work3` is stored in the exact same
# format, it can be accessed the same way as `work2` to reconstruct input
# from output.
def unvectorize3(a_vector, pattern=('shape','fill','size')):
    reverser = [{v: k2 for k2, v in work3[k].items()} for k in pattern]
    for index, vect in enumerate(a_vector):
        yield pattern[index], reverser[index][vect]

print(list(unvectorize3(vectorize3(a))))
print(list(unvectorize3(vectorize3(b))))
Final comments:
Each of these implementations suffers from storing state in a global variable, which I find anti-aesthetic, but depending on what you're planning to do with that vector later, that might be a feature, as I demonstrated.
Edit:
After another day of meditating on this, and on the sorts of situations where I might need it, I think I'd encapsulate this feature like this:
from collections import defaultdict
from itertools import count

class slotter4:
    def __init__(self):
        # keep track of what order we expect to see keys in
        self.pattern = defaultdict(count(1).__next__)
        # keep track of what values we've seen and what number we've assigned to mean them
        self.work = defaultdict(lambda: defaultdict(count(1).__next__))

    def slot(self, kv, i=False):
        """used to be named verbal_to_value"""
        k, v = kv
        if i and i != self.pattern[k]:  # keep track of the order we saw the initial keys
            raise ValueError("Input fields out of order")
            # in theory we could ignore this error, and just know
            # that we're going to default to the field order we saw
            # first. Or we could just not keep track, which might be
            # required if our code runs too slow, but then we cannot
            # make pattern optional in .unvectorize()
        return self.work[k][v]

    def vectorize(self, alist):
        return [self.slot(kv, i) for i, kv in enumerate(alist, 1)]
        # if we're not keeping track of the field pattern, we could do this instead:
        # return [self.work[k][v] for k, v in alist]

    def unvectorize(self, a_vector, pattern=None):
        if pattern is None:
            pattern = [k for k, v in sorted(self.pattern.items(), key=lambda a: a[1])]
        reverser = [{v: k2 for k2, v in self.work[k].items()} for k in pattern]
        return [(pattern[index], reverser[index][vect])
                for index, vect in enumerate(a_vector)]

# test suite 4:
s = slotter4()
if __name__ == '__main__':
    Av = s.vectorize(a)
    Bv = s.vectorize(b)
    print(Av)  # shows [1,1,1]
    print(Bv)  # shows [1,2,2]
    print(s.unvectorize(Av))  # shows a
    print(s.unvectorize(Bv))  # shows b
else:
    # run the test silently, and only complain if something has broken
    assert s.unvectorize(s.vectorize(a)) == a
    assert s.unvectorize(s.vectorize(b)) == b
Good luck out there!
Not the best approach, but it may help you figure out a better solution:
class Shape:
    counter = {}

    def to_tuple(self, tuples):
        self.tuples = tuples
        self._add()
        l = []
        for i, v in self.tuples:
            l.append(self.counter[i][v])
        return l

    def _add(self):
        for i, v in self.tuples:
            if i in self.counter.keys():
                if v not in self.counter[i]:
                    self.counter[i][v] = max(self.counter[i].values()) + 1
            else:
                self.counter[i] = {v: 0}
a = [('shape', 'rectangle'), ('fill', 'no'), ('size', 'huge')]
b = [('shape', 'rectangle'), ('fill', 'yes'), ('size', 'large')]
s = Shape()
s.to_tuple(a)
s.to_tuple(b)
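For reference, a quick check of what this sketch returns for the question's inputs (note that it is 0-based, unlike the 1-based vectors above):
print(s.to_tuple(a))  # [0, 0, 0]
print(s.to_tuple(b))  # [0, 1, 1]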

Using the length of a parameter array as the default value of another parameter of the same function

This is my first time asking a question on SO, so if I'm somehow not doing it properly, don't hesitate to edit it or ask me to modify it.
I think my question is kind of general, so I'm quite surprised not to have found any previous one related to this topic. If I missed it and this question is a duplicate, I'll be very grateful if you could provide a link to where it was already answered.
Imagine I need to implement a function with (at least) three parameters: an array a, a start index and an end index. If not provided, the start parameter should refer to the first position of the array (start = 0), while the end parameter should be set to the last position (end = len(a) - 1). Obviously, the definition:
def function(a, start = 0, end = (len(a) - 1)):
    #do_something
    pass
does not work, leading to an exception (NameError: name 'a' is not defined). There are some workarounds, such as using end = -1 or end = None, and conditionally assign it to len(a) - 1 if needed inside the body of the function:
def function(a, start = 0, end = -1):
    end = end if end != -1 else (len(a) - 1)
    #do_something
but I have the feeling that there should be a more "pythonic" way of dealing with such situations, not only with the length of an array but with any parameter whose default value is a function of another (non optional) parameter. How would you deal with a situation like that? Is the conditional assignment the best option?
Thanks!
Using a sentinel value such as None is typical:
def func(a, start=0, end=None):
    if end is None:
        end = # whatever
    # do stuff
However, for your actual use case, there's already a builtin way to do this that fits in with the way Python does start/stop/step - which makes your function provide a consistent interface as to the way builtins/other libraries work:
def func(a, *args):
    slc = slice(*args)
    for el in a[slc]:
        print(el)
See https://docs.python.org/2/library/functions.html#slice
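For example, with the func defined just above, the call patterns mirror the built-in slicing semantics (my own illustration; note that slice() needs at least one argument, so passing None covers the whole list):
a = [10, 20, 30, 40, 50]
func(a, None)   # slice(None): prints every element
func(a, 3)      # slice(3): stop=3, prints 10, 20, 30
func(a, 1, 3)   # slice(1, 3): prints 20, 30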
If you only want to support start/end in that order, then (note that None effectively means until len(a) when used as end or 0 when used as start):
def func(a, start=0, end=None):
    return a[start:end]
This isn't possible: default values are evaluated once, when the function is defined, so a default can't refer to another parameter such as a. You can pass functions along through the parameters, but not values computed from the other arguments. Compute the dependent value inside the body instead:
def function(a, start = 0):
    end = len(a)
Based on the answer provided by @NPE in Function with dependent preset arguments, an alternative to using -1 or (better) None as a sentinel value is using an object (a named object?), which works even if None is a valid value for the parameter. For example:
default = object()

def function(a, start = 0, end = default):
    if end is default: end = (len(a) - 1)
    return start, end
allows a call like: function([1,2,3,4]) which returns (0, 3)
I personally find this solution quite convenient, at least for my own purpose
Edit: Maybe the code is even more readable if we use last instead of default:
last = object()

def function(a, start = 0, end = last):
    if end is last: end = (len(a) - 1)
    return start, end
I am unsure of your definition of an array or the premise of your problem, but to my understanding you are trying to get end assigned to the length of a. If so, just assign end inside the function body instead of in the argument list, like so:
def function(a, start=0):
    end = len(a)
Or do the conditional like you said:
def function(a, start=0, end=False):
    if not end:
        end = len(a)
Or simply declare the end variable before calling the function and pass it in the arguments!
Not sure if that answered your question, but hope it helped lol!
This has to be the hackiest code I've ever written. I think it comes close to what you were asking for (another alternative I came up with was using a lambda inside the function definition, but it takes a bit too much room to be pretty IMO):
import inspect
from functools import wraps

class defaultArguments(object):
    def __init__(self):
        self.lazyArgs = []

    def initialize(self, func):
        lazyArgs, self.lazyArgs = self.lazyArgs, []
        @wraps(func)
        def functionWrapper(*args, **kw):
            if lazyArgs:
                argNames, defaults = inspect.getargspec(func)[::3]
                argValues = list(args) + [
                    kw[y] if y in kw else defaults[x]
                    for x, y in enumerate(argNames[len(args):])
                ]
                oldGlobals = {}
                for n, v in zip(argNames, argValues):
                    try:
                        oldGlobals[n] = globals()[n]
                    except:
                        oldGlobals[n] = None
                    if v not in lazyArgs:
                        globals()[n] = v
                    else:
                        globals()[n] = kw[n] = eval(v)
                for o, v in oldGlobals.items(): globals()[o] = v
            return func(*args, **kw)
        return functionWrapper

    def __call__(self, x):
        self.lazyArgs.append(x)
        return x
Using it:
d = defaultArguments()

@d.initialize
def function1(a, start=d('a[-1]'), end=d('len(a)-1')):
    print a, start, end

function1([1,2,8])
>>> [1, 2, 8] 8 2

function1([1,2,8,10], end=1)
>>> [1, 2, 8, 10] 10 1
@d.initialize
def function2(a, b, c, start=d('a*b*c'), end=d('a+b+c+start')):
    print a, start, end

function2(2,4,6)
>>> 2 48 60

# Notice that `end` does take the calculated value of `start` into
# account. The logic here is based on what you'd expect to happen
# with normal assignment if the names were each assigned a value
# sequentially: a is evaluated, then b, then c, etc...
I do feel guilty for doing this, especially with the way I resorted to using globals and other cheats. However, I think it works as you requested.
Unfortunately, you do have to write extra stuff (using a decorator and having to wrap keyword values in d('') ), but that was inevitable as Python doesn't support this natively.
Edit:
I worked on the sugary part of the syntax a bit. Shortened it down to a simple decorator function.
def initArgs(func):
    @wraps(func)
    def functionWrapper(*args, **kw):
        argNames, defaults = inspect.getargspec(func)[::3]
        for k in kw:
            for i in argNames:
                if k != i and ('_' + k) == i:
                    kw['_' + k] = kw[k]
                    del kw[k]
        argValues = list(args) + [
            kw[y] if y in kw else defaults[x]
            for x, y in enumerate(argNames[len(args):])
        ]
        oldGlobals = {}
        for n, v in zip(argNames, argValues):
            try:
                oldGlobals[n] = globals()[n]
            except:
                oldGlobals[n] = None
            if not n.startswith('_') or n in kw:
                globals()[n] = v
            else:
                globals()[n] = kw[n] = eval(v)
        for o, v in oldGlobals.items(): globals()[o] = v
        return func(*args, **kw)
    return functionWrapper
To use it:
# When using initArgs, the strings associated with the keyword arguments will
# get eval'd only if the name is preceded with an underscore (`_`). It's a
# bit strange and unpythonic for part of a name to have side effects, but then
# again name mangling works with double underscores (`__`) before methods.

# Example:
@initArgs
def function1(a, _start='a[-1]', _end='len(a)-1'):
    print a, _start, _end

function1([1,2,8,10])
>>> [1, 2, 8, 10] 10 3

# Removing the underscore (`_`) from start
@initArgs
def function2(a, start='a[-1]', _end='len(a)-1'):
    print a, start, _end

function2([1,2,8,10])
>>> [1, 2, 8, 10] 'a[-1]' 3
# Outputs the default string unchanged.
In the caller's frame, the arguments start and end can be used with or without their underscores, so changing their names in the function definition at a later point wouldn't affect the caller. The only exception is within the function itself, where removing an underscore (_) would require doing the same everywhere else inside.

Accessing List from outside function

I have a list that I create inside of function1. I want to be able to access and modify it in function2. How can I do this without a global variable?
Neither function is nested within the other and I need to be able to generalize this for multiple lists in several functions.
I want to be able to access word_list and sentence_starter in other functions.
def Markov_begin(text):
    print create_word_lists(text)
    print pick_starting_point(word_list)
    return starting_list

def create_word_lists(filename):
    prefix_dict = {}
    word_list = []
    sub_list = []
    word = ''
    fin = open(filename)
    for line in fin:
        the_line = line.strip()
        for i in line:
            if i not in punctuation:
                word += (i)
            if i in punctuation:
                sub_list.append(word)
                word_list.append(sub_list)
                sub_list = []
                word = ''
    sub_list.append(word)
    word_list.append(sub_list)
    print 1
    return word_list

def pick_starting_point(word_list):
    sentence_starter = ['.','!','?']
    starting_list = []
    n = 0
    for n in range(len(word_list)-1):
        for i in word_list[n]:
            for a in i:
                if a in sentence_starter:
                    starting_list += word_list[n+1]
    print 2
    return starting_list

def create_prefix_dict(word_list, prefix_length):
    while prefix > 0:
        n = 0
        while n < (len(word_list)-prefix):
            key = str(''.join(word_list[n]))
            if key in prefix_dict:
                prefix_dict[key] += word_list[n+prefix]
            else:
                prefix_dict[key] = word_list[n+prefix]
            n += 1
            key = ''
        prefix -= 1
print Markov_begin('Reacher.txt')
You should refactor this as a class:
class MyWords(object):
    def __init__(self):
        self.word_list = ...  # code to create word list

    def pick_starting_point(self):
        # do something with self.word_list
        return ...
Usage
words = MyWords()
words.pick_starting_point()
...
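Applied to the question's code, a minimal sketch of that class route might look like this (the names and the simplified file handling are mine, kept hypothetical):
class Markov(object):
    def __init__(self, filename):
        # the shared state lives on the instance instead of in a global
        self.word_list = self.create_word_lists(filename)

    def create_word_lists(self, filename):
        # ... build and return the word list from the file ...
        return []

    def pick_starting_point(self):
        sentence_starter = ['.', '!', '?']
        starting_list = []
        for n in range(len(self.word_list) - 1):
            for i in self.word_list[n]:
                for a in i:
                    if a in sentence_starter:
                        starting_list += self.word_list[n + 1]
        return starting_list

m = Markov('Reacher.txt')
print m.pick_starting_point()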
You can simply use the list that the first function creates as an argument of the second function:
def some_list_function():
    # this function would generate your list
    return mylist

def some_other_function(mylist):
    # this function takes a list as an argument and processes it as you want
    return result
some_other_function(some_list_function())
But if you need to use the list in multiple places (processed by multiple functions), then storing it in a variable is not really a bad thing. Even better, if your list-generating function does some computing to build the list, you're saving CPU by computing it only once.
If you do not want to a) use a global or b) return the list and pass it about, then you will have to use a class and hold your list in there.
The class route is best
Functions can have attribute values (For examples, see question 338101.)
In the current context, you could save and reference items like prefix_dict, word_list, sub_list, and word as individual attributes of whichever function computes them, as illustrated in the following example. However, use of a class, as suggested in other answers, is more likely to be understandable and maintainable in the long term.
For example:
In [6]: def fun1(v):
   ...:     fun1.li = range(v,8)
   ...:     return v+1
   ...:

In [7]: def fun2(v):
   ...:     fun2.li = range(v,12) + fun1.li
   ...:     return v+2
   ...:

In [8]: fun1(3)
Out[8]: 4

In [9]: fun2(6)
Out[9]: 8

In [10]: fun2.li
Out[10]: [6, 7, 8, 9, 10, 11, 3, 4, 5, 6, 7]
