Keyword argument aliases in Python

I am creating a custom 3D space from scratch for some new coders, and I am worried that perhaps my documentation isn't beginner friendly. Here is an example:
def make_point(**kwargs):
    X, Y, Z = 0, 0, 0
    if "Xcoord" in kwargs: X = kwargs["Xcoord"]
    if "Ycoord" in kwargs: Y = kwargs["Ycoord"]
    if "Zcoord" in kwargs: Z = kwargs["Zcoord"]
    return (X, Y, Z)  # note: tuple(X, Y, Z) would raise a TypeError; tuple() takes a single iterable
But the way I named the keyword arguments isn't very suitable for a new programmer with some knowledge of linear algebra, yet I need the names to keep track of which variable is what. So in that manner I have vXcoord for the vector x coordinate, pXcoord for the point, and so on.
Is there a way to make the keyword arguments more user friendly, so that if a user typed vX, vectorX, or whatever else seems logical, it would still map to vXcoord?

The idea:
The idea is that every class could carry a list of aliases for specific attributes, so the user could rely on the class's own naming logic (a point needs x, y, z, name attributes; a dog needs breed, name, age, sex attributes; and so on) to set attributes without needing to know the exact name said attribute has.
The logic:
If a function or class takes keyword arguments, then I need a minimal list of common words associated with each argument. Synonyms and idioms can be googled easily, but I would advise against a big list of synonyms; keep it small, 2-3 plus the attribute name. Then all we need is to map those aliases back to the original attribute, since we as coders need to know how to reach the attribute without resorting to getattr(self, someattributestring).
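Before the full machinery below, here is a minimal self-contained sketch of that mapping idea; the alias table and the canonicalize name are illustrative only, not part of the tested module:

```python
# Minimal sketch: map several user-facing aliases to one canonical keyword.
ALIASES = {
    "Xcoord": {"x", "vx", "vectorx", "xcoord"},
    "Ycoord": {"y", "vy", "vectory", "ycoord"},
}

def canonicalize(**kwargs):
    out = {}
    for key, value in kwargs.items():
        for canonical, names in ALIASES.items():
            # Match case-insensitively against the alias set or the canonical name itself.
            if key == canonical or key.lower() in names:
                out[canonical] = value
                break
        else:
            raise TypeError("unknown keyword: %r" % key)
    return out

print(canonicalize(vectorX=1, y=2))  # {'Xcoord': 1, 'Ycoord': 2}
```

The code below generalizes this: it generates the alias sets automatically instead of hand-writing them.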
Code:
Chronologically we must first define a function to generate aliases.
# Generate aliases for attributes
def generateAliases(*argListNames):
    returningValues = []  # Could be omitted if you would rather make this a generator
    la = returningValues.append  # Could also be omitted
    # Normalize argListNames to lowercase strings for simplicity
    argListNames = [str(name).lower() for name in argListNames]
    # Small nameless lambda helpers
    isTL = lambda obj: isinstance(obj, (tuple, list))  # Is it a tuple or a list?
    getFirstChr = lambda element: element[0]  # First character
    connectedJoin = lambda connector, item, args: connector.join([item, args if not isTL(args) else connector.join(args)])
    # String convertors used to generate the alias variants
    convertorList = [lambda x: x, getFirstChr, str.title, str.upper, lambda x: getFirstChr(str.upper(x))]
    for item in argListNames:
        # An alias should not repeat the item itself
        listNoItem = [x for x in argListNames if x != item]
        la(item)  # If you made this a generator, use 'yield' instead of la(...)
        for conversion in convertorList:
            converted = conversion(item)
            for connector in ("_", ""):  # Join with an underscore and with nothing
                for listItem in listNoItem:
                    for cnvrt in convertorList:  # Second stage: convert the remaining items too
                        la(connectedJoin(connector, converted, cnvrt(listItem)))
                la(connectedJoin(connector, converted, listNoItem))
    # If you made this a generator, omit the next lines
    returningValues = [x.replace("_", "") if x.endswith("_") else x for x in returningValues]
    return sorted(set(returningValues))
Now we need to map and check those arguments inside a function or class, so we need some argument parser.
## **kwargs argument parser, no error handling
def argumentParser(ApprovedSequence, **kwargs):
    # ApprovedSequence is a dictionary of the form
    # {"original argument": generateAliases(originalArgumentName, somealias, somealias, ...), ...}
    """
    Parses the keyword arguments,
    for example: argumentParser(ApprovedSequence, someArgument=somevalue, otherArgument=othervalue, ...)
    It checks whether someArgument is needed by looking the name "someArgument" up in ApprovedSequence:
    if "someArgument" is found there, it returns a dictionary of DefaultKeys: Values,
    for example: DefaultKey for someArgument: somevalue
    input:
        argumentParser(dict: ApprovedSequence, kwargs)
    returns:
        dictionary of found attributes and their values
    !!important!! kwargs are not case sensitive here, so go crazy as long as you hit an appropriate keyword!!
    If you don't know what kind of keywords a class needs,
    just type className.errorAttributeNames(),
    for example: point.errorAttributeNames()
    """
    if isinstance(ApprovedSequence, dict):
        di = dict.items  # dict.items(someDict)
        dk = dict.keys   # dict.keys(someDict)
        # Managing the kwargs against the approved sequence data
        toLowerStr = lambda el: str(el).lower()  # Conversion to a lower-case string
        # Map an incoming keyword back to its canonical key (raises IndexError on an unknown keyword)
        assignKey = lambda el: [key for key in dk(ApprovedSequence) if toLowerStr(el) in ApprovedSequence[key]][0]
        return {assignKey(k): v for k, v in di(kwargs)}  # Dictionary comprehension
    else:
        raise TypeError("argumentParser accepts only a dictionary as ApprovedSequence, aka the first item")
Implementation
def somefunction(**kwargs):
    aliases = {
        "val1": generateAliases("first", "1"),
        "val2": generateAliases("second", "2")
    }
    approved = argumentParser(aliases, **kwargs)
    if "val1" in approved: val1 = approved["val1"]
    else: val1 = 0  # Default value for val1
    if "val2" in approved: val2 = approved["val2"]
    else: val2 = 1  # Default value for val2
    # Do something, or your code here
    return val1, val2
# For testing purposes
for x in [{"first": 1}, {"second": 2, "first": 3}, {"f1": 4, "s2": 5}, {"f_1": 6, "2_s": 7}]:
    # Displaying the inputted variables
    form = ["passed "]
    form += ["{} as {} ".format(key, value) for key, value in x.items()]
    # Exercising somefunction
    print("".join(form), somefunction(**x))
Output
python27 -m kwtest
Process started >>>
passed first as 1 (1, 1)
passed second as 2 first as 3 (3, 2)
passed f1 as 4 s2 as 5 (4, 5)
passed 2_s as 7 f_1 as 6 (6, 7)
<<< Process finished. (Exit code 0)
python35 -m kwtest
Process started >>>
passed first as 1 (1, 1)
passed first as 3 second as 2 (3, 2)
passed f1 as 4 s2 as 5 (4, 5)
passed f_1 as 6 2_s as 7 (6, 7)
<<< Process finished. (Exit code 0)
If implemented in classes, the process is similar in __init__, but __getitem__, __setitem__, and __delitem__ must be coded so they search the aliases for attribute names as well. Attribute names could also be generated with self.attributes = list(aliases.keys()) or something like that. Default values could be stored in the class via __kwdefaults__ or a defaults mapping, depending on what kind of data your function is using.
This code has been tested on Python 2.7 and Python 3.5 as you can see.
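As a compact illustration of that class pattern; the alias lists and defaults here are made up for the example, not taken from the tested module:

```python
class Point(object):
    # Canonical attribute name -> lowercase aliases it should answer to (illustrative)
    aliases = {
        "x": ["x", "xcoord", "vx", "vectorx"],
        "y": ["y", "ycoord", "vy", "vectory"],
    }
    defaults = {"x": 0, "y": 0}

    def __init__(self, **kwargs):
        for attr, names in self.aliases.items():
            value = self.defaults[attr]  # start from the default
            for key, v in kwargs.items():
                if key.lower() in names:  # case-insensitive alias match
                    value = v
            setattr(self, attr, value)

p = Point(vectorX=3)
print(p.x, p.y)  # 3 0
```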
Further explanation if needed
You can define aliases within class global attributes or within __init__.
Explaining further __getitem__:
def __getitem__(self, item):
    if item in self.aliases.keys():
        return getattr(self, item)
    if any(item in value for value in self.aliases.values()):
        item = [key for key in self.aliases.keys() if item in self.aliases[key]][0]
        return getattr(self, item)
    if item in range(len(self.aliases.keys())):
        item = list(self.aliases.keys())[item]
        return getattr(self, item)
Explaining further __setitem__:
def __setitem__(self, item, value):
    # The class must have a __dict__, i.e. be a new-style class, like class someclass(object)
    if item not in self.aliases:
        # Resolve an alias back to its canonical attribute name
        matches = [key for key in self.aliases if item in self.aliases[key]]
        item = matches[0] if matches else None
    if item is not None:
        setattr(self, item, value)


A memoized function that takes a tuple of strings to return an integer?

Suppose I have arrays of tuples like so:
a = [('shape', 'rectangle'), ('fill', 'no'), ('size', 'huge')]
b = [('shape', 'rectangle'), ('fill', 'yes'), ('size', 'large')]
I am trying to turn these arrays into numerical vectors with each dimension representing a feature.
So the expected output would be something like:
amod = [1, 0, 1] # or [1, 1, 1]
bmod = [1, 1, 2] # or [1, 2, 2]
So the vector that gets created depends on what the function has seen before (i.e. rectangle is still coded as 1, but the new value 'large' gets coded as the next step up, 2).
I think I could use some combination of yield and a memoize function to help me with this. This is what I've tried so far:
def memoize(f):
    memo = {}
    def helper(x):
        if x not in memo:
            memo[x] = f(x)
        return memo[x]
    return helper

@memoize
def verbal_to_value(tup):
    u = 1
    if tup[0] == 'shape':
        yield u
    u += 1
    if tup[0] == 'fill':
        yield u
    u += 1
    if tup[0] == 'size':
        yield u
    u += 1
But I keep getting this error:
TypeError: 'NoneType' object is not callable
Is there a way I can create this function that has a memory of what it has seen? Bonus points if it could add keys dynamically so I don't have to hardcode things like 'shape' or 'fill'.
First off: this is my preferred implementation of the memoize decorator, mostly because of speed ...
def memoize(f):
    class memodict(dict):
        __slots__ = ()
        def __missing__(self, key):
            self[key] = ret = f(key)
            return ret
    return memodict().__getitem__
except for a few edge cases it has the same effect as yours:
def memoize(f):
    memo = {}
    def helper(x):
        if x not in memo:
            memo[x] = f(x)
        #else:
        #    pass
        return memo[x]
    return helper
but is somewhat faster, because the if x not in memo: check happens in native code instead of in Python. To understand it you merely need to know that, under normal circumstances, to interpret adict[key] Python calls adict.__getitem__(key); if adict doesn't contain key, __getitem__() calls adict.__missing__(key), so we can leverage the Python magic-method protocols for our gain...
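A tiny self-contained demonstration of that __missing__ protocol (the Doubler class is purely illustrative):

```python
class Doubler(dict):
    # dict.__getitem__ calls __missing__ only when the key is absent.
    def __missing__(self, key):
        self[key] = result = key * 2  # compute once and cache in the dict itself
        return result

d = Doubler()
print(d[21])  # 42, computed via __missing__
print(d[21])  # 42, now a plain dict hit; __missing__ is not called again
```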
# This is the first idea I had for implementing your
# verbal_to_value() using memoization:
from collections import defaultdict
work = defaultdict(set)

@memoize
def verbal_to_value(kv):
    k, v = kv
    aset = work[k]  # work creates a new set, if not already created
    aset.add(v)     # add the value if not already added
    return len(aset)
including the memoize decorator, that's 15 lines of code...
# test suite:
def vectorize(alist):
    return [verbal_to_value(kv) for kv in alist]

a = [('shape', 'rectangle'), ('fill', 'no'), ('size', 'huge')]
b = [('shape', 'rectangle'), ('fill', 'yes'), ('size', 'large')]
print(vectorize(a))  # shows [1, 1, 1]
print(vectorize(b))  # shows [1, 2, 2]
defaultdict is a powerful object with almost the same logic as memoize: a standard dictionary in every way, except that when a lookup fails, it runs the callback function to create the missing value. In our case, set().
Unfortunately this problem requires access either to the tuple that is being used as the key, or to the dictionary state itself, with the result that we cannot just write a simple function for .default_factory.
But we can write a new object based on the memoize/defaultdict pattern:
# This is how I would implement your verbal_to_value without
# memoization, though the worker class is so similar to memoize
# that it's easy to see why memoize is a good pattern to work from:
class sloter(dict):
    __slots__ = ()
    def __missing__(self, key):
        self[key] = ret = len(self) + 1
        # this + 1 bothers me, why can't these vectors be 0 based? ;)
        return ret
from collections import defaultdict
work2 = defaultdict(sloter)

def verbal_to_value2(kv):
    k, v = kv
    return work2[k][v]
# ~10 lines of code?
# test suite 2:
def vectorize2(alist):
    return [verbal_to_value2(kv) for kv in alist]

print(vectorize2(a))  # shows [1, 1, 1]
print(vectorize2(b))  # shows [1, 2, 2]
You might have seen something like sloter before, because it's sometimes used for exactly this sort of situation: converting member names to numbers and back. Because of this, we have the advantage of being able to reverse things like this:
def unvectorize2(a_vector, pattern=('shape', 'fill', 'size')):
    reverser = [{v: k2 for k2, v in work2[k].items()} for k in pattern]
    for index, vect in enumerate(a_vector):
        yield pattern[index], reverser[index][vect]
print (list(unvectorize2(vectorize2(a))))
print (list(unvectorize2(vectorize2(b))))
But I saw those yields in your original post, and they got me thinking... what if there were a memoize/defaultdict-like object that could take a generator instead of a function and knew to just advance the generator rather than calling it? Then I realized... that yes, generators come with a callable called __next__(), which means we don't need a new defaultdict implementation, just a careful extraction of the correct member function...
def count(start=0):  # same as: from itertools import count
    while True:
        yield start
        start += 1

# so we could get the exact same behavior as above (except faster)
# by saying:
sloter3 = lambda: defaultdict(count(1).__next__)
# and then
work3 = defaultdict(sloter3)
# or just:
work3 = defaultdict(lambda: defaultdict(count(1).__next__))
# which yes, is a bit of a mindwarp if you've never needed to do that
# before.
# the outer defaultdict interprets the first item. Every time a new
# first item is received, the lambda is called, which creates a new
# count() generator (starting from 1), and passes its .__next__ method
# to a new inner defaultdict.
def verbal_to_value3(kv):
    k, v = kv
    return work3[k][v]
# you *could* call that 8 lines of code, but we managed to use
# defaultdict twice, and didn't need to define it, so I wouldn't call
# it 'less complex' or anything.

# test suite 3:
def vectorize3(alist):
    return [verbal_to_value3(kv) for kv in alist]

print(vectorize3(a))  # shows [1, 1, 1]
print(vectorize3(b))  # shows [1, 2, 2]
#so yes, that can also work.
# and since the internal state in `work3` is stored in the exact same
# format, it can be accessed the same way as `work2` to reconstruct input
# from output.
def unvectorize3(a_vector, pattern=('shape', 'fill', 'size')):
    reverser = [{v: k2 for k2, v in work3[k].items()} for k in pattern]
    for index, vect in enumerate(a_vector):
        yield pattern[index], reverser[index][vect]
print (list(unvectorize3(vectorize3(a))))
print (list(unvectorize3(vectorize3(b))))
Final comments:
Each of these implementations suffers from storing state in a global variable, which I find anti-aesthetic; but depending on what you're planning to do with that vector later, that might be a feature, as I demonstrated.
Edit:
After another day of meditating on this, and on the sorts of situations where I might need it, I think I'd encapsulate this feature like this:
from collections import defaultdict
from itertools import count

class slotter4:
    def __init__(self):
        # keep track of what order we expect to see keys in
        self.pattern = defaultdict(count(1).__next__)
        # keep track of what values we've seen and what number we've assigned to mean them
        self.work = defaultdict(lambda: defaultdict(count(1).__next__))
    def slot(self, kv, i=False):
        """used to be named verbal_to_value"""
        k, v = kv
        if i and i != self.pattern[k]:  # keep track of the order we saw the initial keys in
            raise ValueError("Input fields out of order")
            # in theory we could ignore this error, and just know
            # that we're going to default to the field order we saw
            # first. Or we could just not keep track, which might be
            # required if our code runs too slow, but then we cannot
            # make pattern optional in .unvectorize()
        return self.work[k][v]
    def vectorize(self, alist):
        return [self.slot(kv, i) for i, kv in enumerate(alist, 1)]
        # if we're not keeping track of the field pattern, we could do this instead:
        # return [self.work[k][v] for k, v in alist]
    def unvectorize(self, a_vector, pattern=None):
        if pattern is None:
            pattern = [k for k, v in sorted(self.pattern.items(), key=lambda a: a[1])]
        reverser = [{v: k2 for k2, v in self.work[k].items()} for k in pattern]  # was work3, a leftover global
        return [(pattern[index], reverser[index][vect])
                for index, vect in enumerate(a_vector)]
# test suite 4:
s = slotter4()
if __name__ == '__main__':
    Av = s.vectorize(a)
    Bv = s.vectorize(b)
    print(Av)  # shows [1, 1, 1]
    print(Bv)  # shows [1, 2, 2]
    print(s.unvectorize(Av))  # shows a
    print(s.unvectorize(Bv))  # shows b
else:
    # run the test silently, and only complain if something has broken
    assert s.unvectorize(s.vectorize(a)) == a
    assert s.unvectorize(s.vectorize(b)) == b
Good luck out there!
Not the best approach, but it may help you figure out a better solution:
class Shape:
    counter = {}
    def to_tuple(self, tuples):
        self.tuples = tuples
        self._add()
        l = []
        for i, v in self.tuples:
            l.append(self.counter[i][v])
        return l
    def _add(self):
        for i, v in self.tuples:
            if i in self.counter.keys():
                if v not in self.counter[i]:
                    self.counter[i][v] = max(self.counter[i].values()) + 1
            else:
                self.counter[i] = {v: 0}
a = [('shape', 'rectangle'), ('fill', 'no'), ('size', 'huge')]
b = [('shape', 'rectangle'), ('fill', 'yes'), ('size', 'large')]
s = Shape()
s.to_tuple(a)
s.to_tuple(b)

Nested dict keys as variable

There must be a more graceful way of doing this, but I cannot figure out how to create a single function for reading/writing values at different levels of a dict. This is the 'best' I could come up with:
table = {
    'A': {
        'B': '2',
        'C': {
            'D': '3'
        }
    }
}
first = 'A'
second1 = 'B'
second2 = 'C'
third = 'D'
def oneLevelDict(first):
    x = table[first]
    print(x)

def twoLevelDict(first, second):
    x = table[first][second]
    print(x)

def threeLevelDict(first, second, third):
    x = table[first][second][third]
    print(x)
oneLevelDict(first)
twoLevelDict(first, second1)
threeLevelDict(first, second2, third)
You can use *args to pass an arbitrary number of arguments to a function. You can then use a loop to traverse the levels.
def get_any_level(*keys):
    d = table
    for key in keys:
        d = d[key]
    return d
Now you have one function that can replace the three you had before:
print(get_any_level(first))
print(get_any_level(first, second1))
print(get_any_level(first, second2, third))
You can use this function to write to an arbitrary level as well:
get_any_level(first)[second1] = 17
A better way might be to have a separate function to write though:
def put_any_level(value, *keys):
    get_any_level(*keys[:-1])[keys[-1]] = value

put_any_level(17, first, second1)
value has to come first in the argument list unless you want it to be keyword-only because *keys will consume all positional arguments. This is not necessarily a bad alternative:
def put_any_level(*keys, value):
    get_any_level(*keys[:-1])[keys[-1]] = value
The keyword argument adds clarity:
put_any_level(first, second1, value=17)
But it will also lead to an error if you attempt to pass it as a positional argument, e.g. put_any_level(first, second1, 17).
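One caveat the answer above does not cover: put_any_level raises a KeyError when an intermediate level does not exist yet. A variant using dict.setdefault can create missing levels on the way down (put_creating is my name for this sketch, not from the original answer):

```python
def put_creating(d, value, *keys):
    # Walk down the nesting, creating an empty dict for each missing intermediate key.
    for key in keys[:-1]:
        d = d.setdefault(key, {})
    d[keys[-1]] = value

table = {}
put_creating(table, 17, 'A', 'B')
put_creating(table, '3', 'A', 'C', 'D')
print(table)  # {'A': {'B': 17, 'C': {'D': '3'}}}
```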
Couple of minor points:
It's conventional to use CamelCase only for class names. Variables and functions are conventionally written in lowercase_with_underscores.
A function should generally do one thing, and do it well. In this case, I've split the task of finding the nested value from the task of displaying it by giving the function a return value.
This can be achieved using *args, and this is how to do it:
def allLevelDict(*argv):
    if len(argv) == 1:
        x = table[argv[0]]
        print(x)
    elif len(argv) == 2:
        x = table[argv[0]][argv[1]]
        print(x)
    elif len(argv) == 3:
        x = table[argv[0]][argv[1]][argv[2]]
        print(x)
allLevelDict(first)
allLevelDict(first, second1)
allLevelDict(first, second2, third)
Similar to the other suggestions, but perhaps even more graceful, if you like recursion:
table = {'A':{'B':'2','C':{'D':'3'}}}
first = 'A'
second1 = 'B'
second2 = 'C'
third = 'D'
def get_from(x, *keys):
    return get_from(x[keys[0]], *keys[1:]) if len(keys) > 0 else x
print(get_from(table, first))
print(get_from(table, first, second1))
print(get_from(table, first, second2, third))
Note: I'm also passing in the table, since I imagine you'd want to be able to use it on other dictionaries also.
Or, if you think shorter isn't always better:
def get_from(x, *keys):
    if len(keys) > 0:
        return get_from(x[keys[0]], *keys[1:])
    else:
        return x
Normally, recursion can be dangerous, as it's expensive - but since you're unlikely to have incredibly deep dictionaries, I feel it is the right solution here.
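For completeness, here is a sketch of an iterative equivalent using functools.reduce, which avoids recursion entirely (get_nested is my name, chosen to avoid clashing with the get_from above):

```python
from functools import reduce

def get_nested(d, *keys):
    # reduce threads the mapping through successive key lookups:
    # d -> d[keys[0]] -> d[keys[0]][keys[1]] -> ...
    return reduce(lambda acc, key: acc[key], keys, d)

table = {'A': {'B': '2', 'C': {'D': '3'}}}
print(get_nested(table, 'A', 'C', 'D'))  # 3
```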

Understanding the in operator in Python

index = []
def add_to_index(index, keyword, url):
    if len(index) == 0:
        index.append([keyword, [url]])
    elif keyword in index:
        find_key_pos = index.find(keyword)
        index.insert(find_key_pos + len(keyword), url)
add_to_index(index,'udacity','http://udacity.com')
add_to_index(index,'udacity','http://npr.org')
print(index)
My output is:
[['udacity', ['http://udacity.com']]]
Actually the output has to be
[['udacity', ['http://udacity.com', 'http://npr.org']]]
Whenever the keyword already exists in the index list, I just have to insert the url to the list that is next to the keyword.
In,
add_to_index(index,'udacity','http://udacity.com')
add_to_index(index,'udacity','http://npr.org')
The keyword 'udacity' is the same that is why I should add the different url's after that keyword.
Your bugs:
index.insert(find_key_pos + len(keyword), url)
The first parameter to list.insert() is the index for the new element. You actually only want to get the list for your keyword though and append a new URL to the nested list.
What you want instead (the URL list is the second element of each [keyword, [urls]] entry) is:
index[find_key_pos][1].append(url)
Second bug lies in the re-use of the index variable. Your function parameter is shadowing the list from the parent scope. Use different names. Your code will work, because lists are mutable and you are passing around references to the same list, but it will create a hella lot of confusion down the road.
But what you should really do is you should look up Python dictionaries. They offer the keyword functionality out of the box.
Here's a small dict wrapper that will make your life easier:
class ListDict():
    def __init__(self):
        self.index = {}  # must be a dict, not a tuple, for key assignment to work
    def addEntry(self, key, entry):
        if key in self.index:
            self.index[key].append(entry)
        else:
            self.index[key] = [entry]
    def getEntries(self, key):
        if key in self.index:
            return self.index[key]
        else:
            return []
Usage:
websiteUrls = ListDict()
websiteUrls.addEntry("udemy", "foo")
websiteUrls.addEntry("udemy", "bar")
websiteUrls.getEntries("udemy")
# ["foo", "bar"]
websiteUrls.getEntries("nope")
# []
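Worth knowing, though not part of the answer above: the standard library's collections.defaultdict gives the appending half of this behavior without a wrapper class (reading a missing key creates it, unlike getEntries, which leaves the dict untouched):

```python
from collections import defaultdict

index = defaultdict(list)  # missing keys start out as empty lists
index["udacity"].append("http://udacity.com")
index["udacity"].append("http://npr.org")
print(index["udacity"])  # ['http://udacity.com', 'http://npr.org']
print(index["nope"])     # [] (the empty list is created on first access)
```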

Capturing the external modification of a mutable python object serving as an instance class variable

I am trying to track external modification of the entries of a mutable Python object (e.g., a list or a dictionary). This ability is particularly helpful in the following two situations:
1) When one would like to avoid the assignment of unwanted values to the mutable python object. Here's a simple example where x must be a list of integers only:
class foo(object):
    def __init__(self, x):
        self.x = x
    def __setattr__(self, attr_name, attr_value):
        # x must be a list of integers only
        if attr_name == 'x' and not isinstance(attr_value, list):
            raise TypeError('x must be a list!')
        elif attr_name == 'x' and len([a for a in attr_value if not isinstance(a, int)]) > 0:
            raise TypeError('x must be a list of integers only')
        self.__dict__[attr_name] = attr_value
# The following works fine and it throws an error because x has a non-integer entry
f = foo(x = ['1',2,3])
# The following assigns an authorized list to x
f = foo(x = [1,2,3])
# However, the following does not throw any error.
#** I'd like my code to throw an error whenever a non-integer value is assigned to an element of x
f.x[0] = '1'
print 'f.x = ',f.x
2) When one needs to update a number of other variables after modifying the mutable Python object. Here's an example, where x is a dictionary and x_vals needs to get updated whenever any changes (such as deleting an entry or assigning a new value for a particular key) are made to x:
class foo(object):
    def __init__(self, x, y=None):
        self.set_x(x)
        self.y = y
    def set_x(self, x):
        """
        x has to be a dictionary
        """
        if not isinstance(x, dict):
            raise TypeError('x must be a dictionary')
        self.__dict__['x'] = x
        self.find_x_vals()
    def find_x_vals(self):
        """
        NOTE: self.x_vals needs to get updated each time one modifies x
        """
        self.x_vals = self.x.values()
    def __setattr__(self, name, value):
        # Any changes made to x --> NOT SURE HOW TO CODE THIS PART!
        if name == 'x' or ...:
            raise AttributeError('Use set_x to make changes to x!')
        else:
            self.__dict__[name] = value
if __name__ == '__main__':
    f = foo(x={'a': 1, 'b': 2, 'c': 3}, y=True)
    print f.x_vals
    # I'd like this to throw an error asking to use set_x so self.x_vals
    # gets updated too
    f.x['a'] = 5
    # checks if x_vals was updated
    print f.x_vals
    # I'd like this to throw an error asking to use set_x so self.x_vals gets updated too
    del f.x['a']
    print f.x_vals
You could make x_vals a property like that:
@property
def x_vals(self):
    return self.x.values()
And it would keep x_vals up to date each time you access it. It would even be faster, because you wouldn't have to update it each time you change x.
If your only problem is keeping x_vals up to date, this solves it and saves you the hassle of subclassing stuff.
You cannot use property, because the thing you are trying to protect is mutable, and property only helps with the getting, setting, and deleting of the object itself, not with that object's internal state.
What you could do is create a dict subclass (or just a look-a-like if you only need a couple of the dict abilities) to manage access. Then your custom class could manage the __getitem__, __setitem__, and __delitem__ methods.
Update for question revision
My original answer is still valid -- whether you use property or __getattribute__1 you still have the basic problem: once you hand over the retrieved attribute you have no control over what happens to it nor what it does.
You have two options to work around this:
create subclasses of the classes you want to protect, and put the restrictions in them (from my original answer), or
create a generic wrapper to act as a gateway.
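A minimal sketch of the first option, a list subclass that rejects non-integer assignments. IntList is a made-up name, and only __setitem__ and append are guarded here; other mutators such as extend or insert would need the same treatment:

```python
class IntList(list):
    """A list that only accepts integers (bool is excluded for strictness).

    Note: values passed to the constructor are not validated in this sketch.
    """
    def _check(self, value):
        if not isinstance(value, int) or isinstance(value, bool):
            raise TypeError('x must contain integers only, got %r' % (value,))
    def __setitem__(self, index, value):
        self._check(value)
        super().__setitem__(index, value)
    def append(self, value):
        self._check(value)
        super().append(value)

xs = IntList([1, 2, 3])
xs[0] = 7          # fine
try:
    xs[0] = '1'    # raises TypeError, unlike a plain list
except TypeError as e:
    print(e)
```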
A very rough example of the gateway wrapper:
class Gateway():
    "use this to wrap an object and provide restrictions to its data"
    def __init__(self, obj, valid_key=None, valid_value=None):
        self.obj = obj
        self.valid_key = valid_key
        self.valid_value = valid_value
    def __setitem__(self, name, value):
        """
        a dictionary can have any value for name, any value for value
        a list will have an integer for name, any value for value
        """
        valid_key = self.valid_key
        valid_value = self.valid_value
        if valid_key is not None:
            if not valid_key(name):
                raise Exception('%r not allowed as key/index' % type(name))
        if valid_value is not None:
            if not valid_value(value):
                raise Exception('%r not allowed as value' % value)
        self.obj[name] = value
and a simple example:
huh = Gateway([1, 2, 3], valid_value=lambda x: isinstance(x, int))
huh[0] = '1'
Traceback (most recent call last):
...
Exception: '1' not allowed as value
To use Gateway you will need to override more methods, such as append (for list).
1 Using __getattribute__ is not advised as it is the piece that controls all the aspects of attribute lookup. It is easy to get wrong.

Ignore python multiple return value

Say I have a Python function that returns multiple values in a tuple:
def func():
    return 1, 2
Is there a nice way to ignore one of the results rather than just assigning to a temporary variable? Say if I was only interested in the first value, is there a better way than this:
x, temp = func()
You can use x = func()[0] to return the first value, x = func()[1] to return the second, and so on.
If you want to get multiple values at a time, use something like x, y = func()[2:4].
One common convention is to use a "_" as a variable name for the elements of the tuple you wish to ignore. For instance:
def f():
    return 1, 2, 3

_, _, x = f()
If you're using Python 3, you can use a star before a variable (on the left side of an assignment) to have it collect the extra values as a list in unpacking.
# Example 1: a is 1 and b is [2, 3]
a, *b = [1, 2, 3]
# Example 2: a is 1, b is [2, 3], and c is 4
a, *b, c = [1, 2, 3, 4]
# Example 3: b is [1, 2] and c is 3
*b, c = [1, 2, 3]
# Example 4: a is 1 and b is []
a, *b = [1]
The common practice is to use the dummy variable _ (single underscore), as many have indicated here before.
However, to avoid collisions with other uses of that variable name, it might be better practice to use __ (double underscore) as the throwaway variable instead, as pointed out by ncoghlan. E.g.:
x, __ = func()
Remember, when you return more than one item, you're really returning a tuple. So you can do things like this:
def func():
    return 1, 2

print func()[0]  # prints 1
print func()[1]  # prints 2
The best solution probably is to name things instead of returning meaningless tuples (unless there is some logic behind the order of the returned items). You can for example use a dictionary:
def func():
    return {'lat': 1, 'lng': 2}

latitude = func()['lat']
You could even use namedtuple if you want to add extra information about what you are returning (it's not just a dictionary, it's a pair of coordinates):
from collections import namedtuple
Coordinates = namedtuple('Coordinates', ['lat', 'lng'])

def func():
    return Coordinates(lat=1, lng=2)

latitude = func().lat
If the objects within your dictionary/tuple are strongly tied together then it may be a good idea to even define a class for it. That way you'll also be able to define more complex operations. A natural question that follows is: When should I be using classes in Python?
Recent versions of Python (≥ 3.7) have dataclasses, which you can use to define classes with very few lines of code:
from dataclasses import dataclass

@dataclass
class Coordinates:
    lat: float = 0
    lng: float = 0

def func():
    return Coordinates(lat=1, lng=2)

latitude = func().lat
The primary advantage of dataclasses over namedtuple is that they are easier to extend, but there are other differences. Note that by default, dataclasses are mutable, but you can use @dataclass(frozen=True) instead of @dataclass to force them to be immutable.
Three simple choices.
Obvious
x, _ = func()
x, junk = func()
Hideous
x = func()[0]
And there are ways to do this with a decorator.
def val0(aFunc):
    def pick0(*args, **kw):
        return aFunc(*args, **kw)[0]
    return pick0

func0 = val0(func)
This seems like the best choice to me:
val1, val2, ignored1, ignored2 = some_function()
It's not cryptic or ugly (like the func()[index] method), and clearly states your purpose.
If this is a function that you use all the time but always discard the second argument, I would argue that it is less messy to create an alias for the function without the second return value using lambda.
def func():
    return 1, 2

func_ = lambda: func()[0]
func_()  # returns 1
This is not a direct answer to the question. Rather it answers this question: "How do I choose a specific function output from many possible options?".
If you are able to write the function (ie, it is not in a library you cannot modify), then add an input argument that indicates what you want out of the function. Make it a named argument with a default value so in the "common case" you don't even have to specify it.
def fancy_function(arg1, arg2, return_type=1):
    ret_val = None
    if 1 == return_type:
        ret_val = arg1 + arg2
    elif 2 == return_type:
        ret_val = [arg1, arg2, arg1 * arg2]
    else:
        ret_val = (arg1, arg2, arg1 + arg2, arg1 * arg2)
    return ret_val
This method gives the function "advanced warning" regarding the desired output. Consequently it can skip unneeded processing and only do the work necessary to get your desired output. Also because Python does dynamic typing, the return type can change. Notice how the example returns a scalar, a list or a tuple... whatever you like!
When a function has many outputs and you don't want to call it multiple times, I think the clearest way of selecting the results is:
results = fct()
a, b = [results[i] for i in list_of_index]
As a minimal working example, also demonstrating that the function is called only once:
def fct(a):
    b = a * 2
    c = a + 2
    d = a + b
    e = b * 2
    f = a * a
    print("fct called")
    return [a, b, c, d, e, f]

results = fct(3)
> fct called
x, y = [results[i] for i in [1, 4]]
And the values are as expected:
results
> [3, 6, 5, 9, 12, 9]
x
> 6
y
> 12
For convenience, Python's negative list indexes can also be used:
x, y = [results[i] for i in [0, -2]]
This returns: x = 3 and y = 12.
It is possible to ignore every variable except the first with less syntax if you like. If we take your example,
# The function you are calling.
def func():
    return 1, 2

# You seem to only be interested in the first output.
x, temp = func()
I have found that the following works:
x, *_ = func()
This approach "unpacks" with * all other variables into a "throwaway" variable _. This has the benefit of assigning the one variable you want and ignoring all variables behind it.
However, in many cases you may want an output that is not the first output of the function. In these cases, it is probably best to indicate this by using the func()[i] where i is the index location of the output you desire. In your case,
# i == 0 because of zero-index.
x = func()[0]
As a side note, if you want to get fancy in Python 3, you could do something like this,
# This works the other way around.
*_, y = func()
Your function only outputs two potential variables, so this does not look too powerful until you have a case like this,
def func():
    return 1, 2, 3, 4

# I only want the first and last.
x, *_, d = func()
