When we use def, we can use **kwargs and *args to define dynamic inputs to a function.
Is there anything similar for the returned tuple? I've been looking for something that behaves like this:
def foo(data):
    return 2, 1

a, b = foo(5)  # a == 2, b == 1
a = foo(5)     # desired: a == 2
However, if I declare only one variable to unpack into, the whole tuple gets assigned to it:
a = foo(5)  # a == (2, 1)
I could use 'if' statements, but I was wondering whether there is something less cumbersome. I could also use a temporary variable to hold the returned tuple, but the return value might be rather large to keep around just as a placeholder.
Thanks
If you need to fully generalize the return value, you could do something like this:
def function_that_could_return_anything(data):
    # do stuff
    return_args = ['list', 'of', 'return', 'values']
    return_kwargs = {'dict': 0, 'of': 1, 'return': 2, 'values': 3}
    return return_args, return_kwargs

a, b = function_that_could_return_anything(...)
for thing in a:
    pass  # do stuff
for item in b.items():
    pass  # do stuff
In my opinion it would be simpler to just return a dictionary, then access parameters with get():
dict_return_value = foo()
a = dict_return_value.get('key containing a', None)
if a:
    pass  # do stuff with a
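For instance, a minimal sketch of what such a dictionary-returning foo could look like (the key names here are my own illustration, not from the question):
def foo(data):
    # return everything under descriptive keys instead of relying on position
    return {'a': 2, 'b': 1}

result = foo(5)
a = result.get('a')        # 2
b = result.get('b')        # 1
missing = result.get('c')  # None instead of a KeyError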
I couldn't quite understand exactly what you're asking, so I'll take a couple guesses.
If you want to use a single value sometimes, consider a namedtuple:
from collections import namedtuple

AAndB = namedtuple('AAndB', 'a b')

def foo(data):
    return AAndB(2, 1)

# Unpacking all items.
a, b = foo(5)

# Using a single value.
foo(5).a
Or, if you're using Python 3.x, there's extended iterable unpacking to easily unpack only some of the values:
def foo(data):
    return 3, 2, 1

a, *remainder = foo(5)        # a==3, remainder==[2, 1]
a, *remainder, c = foo(5)     # a==3, remainder==[2], c==1
a, b, c, *remainder = foo(5)  # a==3, b==2, c==1, remainder==[]
Sometimes the name _ is used to indicate that you are discarding the value:
a, *_ = foo(5)
Related
Given a list of function names as strings, is it possible to call the corresponding function chosen by random sampling from the list? Currently I am hard-coding each name in an if statement, like this:
import random

def a(x):
    print(x)

def b(y):
    print(y)

# there are more functions

func_list = ['a', 'b', ...]  # all function names are stored as strings

chosen_function = random.choice(func_list)
if chosen_function == 'a':
    a(1)
elif chosen_function == 'b':
    b(2)
# elif continues...
I want to eliminate all the if statements, so that whenever I add new functions to func_list I do not need to modify anything else.
However, I cannot put the function itself in the list, as the actual functions have randomness in them, so I need to call a function only after it has been sampled.
The answer by @Jarvis is the right way to go, but for completeness' sake, I'll show a different way: inspecting the global variables.
import random

def a(x):
    print(x)

def b(y):
    print(y)

# there are more functions

func_list = ['a', 'b', ...]  # all function names are stored as strings

x = 1  # the argument to pass
chosen_function = random.choice(func_list)
globals()[chosen_function](x)
Why not use a dictionary?
def a(x):
    print(x)

def b(y):
    print(y)

func_dict = {'a': a, 'b': b}
Call it like this:
x = 3 # your int input
func_dict['a'](x) # calls a(x)
func_dict['b'](x) # calls b(x)
If you want to directly specify arguments, you can do so in the dictionary like this:
func_dict = {
    'a': lambda: a(1),
    'b': lambda: b(2)
}
Call the default methods like:
func_dict['a']() # calls a(1)
func_dict['b']() # calls b(2)
You may consider using the eval function. In a nutshell, it evaluates a string as a Python expression.
For example, in the code below, eval evaluates the string 'any' and resolves it to Python's built-in any function.
>>> eval('any')
<built-in function any>
On similar grounds, you could evaluate the intended function from a string as below.
def function_a():
    print('function A')

def function_b():
    print('function B')

function_to_run = eval('function_b')  # function_to_run is a callable now
function_to_run()
Result
function B
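Applied to the original question, a minimal sketch might look like this (assuming, as in the question, that func_list holds the function names as strings); note that eval should only be used on strings you fully control:
import random

def function_a():
    print('function A')

def function_b():
    print('function B')

func_list = ['function_a', 'function_b']           # names as strings, as in the question
chosen_function = eval(random.choice(func_list))   # resolve the sampled name to the function object
chosen_function()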
You can set the default values in the functions themselves, and store the function objects (rather than their names) in the list:
import random

def a(x=1):
    print(x)

def b(y=2):
    print(y)

func_list = [a, b]  # the function objects themselves

chosen_function = random.choice(func_list)
chosen_function()
The list.index(x) function returns the index in the list of the first item whose value is x.
Is there a function, list_func_index(), similar to index(), that takes a function f() as a parameter? The function f() is run on every element e of the list until f(e) returns True; then list_func_index() returns the index of e.
Codewise:
>>> def list_func_index(lst, func):
...     for i in range(len(lst)):
...         if func(lst[i]):
...             return i
...     raise ValueError('no element making func True')
>>> l = [8,10,4,5,7]
>>> def is_odd(x): return x % 2 != 0
>>> list_func_index(l,is_odd)
3
Is there a more elegant solution? (and a better name for the function)
You could do that in a one-liner using generators:
next(i for i,v in enumerate(l) if is_odd(v))
The nice thing about generators is that they only compute up to the requested amount. So requesting the first two indices is (almost) just as easy:
y = (i for i,v in enumerate(l) if is_odd(v))
x1 = next(y)
x2 = next(y)
Note, though, that a StopIteration exception is raised once the generator is exhausted (that is how generators work). In your take-first approach this is also convenient for detecting that no matching value was found; the list.index() function would raise ValueError in that situation.
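If you would rather avoid the exception, next() also accepts a default value as a second argument; a small sketch:
l = [8, 10, 4, 5, 7]
idx = next((i for i, v in enumerate(l) if v > 100), None)
print(idx)  # None: nothing matched, and no StopIteration was raised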
One possibility is the built-in enumerate function:
def index_of_first(lst, pred):
    for i, v in enumerate(lst):
        if pred(v):
            return i
    return None
It's typical to refer to a function like the one you describe as a "predicate"; it returns true or false for some question. That's why I call it pred in my example.
I also think it would be better form to return None, since that's the real answer to the question. The caller can choose to explode on None, if required.
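A quick usage sketch (my own illustration, continuing with the index_of_first function defined above) showing how the caller can handle the None case:
l = [8, 10, 4, 5, 7]

def is_odd(x):
    return x % 2 != 0

idx = index_of_first(l, is_odd)
if idx is None:
    print('no matching element')
else:
    print('first match at index', idx)  # first match at index 3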
@Paul's accepted answer is best, but here's a little lateral-thinking variant, mostly for amusement and instruction purposes...:
>>> class X(object):
...     def __init__(self, pred): self.pred = pred
...     def __eq__(self, other): return self.pred(other)
...
>>> l = [8,10,4,5,7]
>>> def is_odd(x): return x % 2 != 0
...
>>> l.index(X(is_odd))
3
Essentially, X's purpose is to change the meaning of "equality" from the normal one to "satisfies this predicate", thereby allowing the use of predicates in all kinds of situations that are defined as checking for equality. For example, instead of if any(is_odd(x) for x in l): it would also let you write the shorter if X(is_odd) in l:, and so forth.
Worth using? Not when a more explicit approach like the one taken by @Paul is just as handy (especially when changed to use the new, shiny built-in next function rather than the older, less appropriate .next method, as I suggest in a comment to that answer), but there are other situations where it (or other variants of the idea of tweaking the meaning of equality, and maybe other comparators and/or hashing) may be appropriate. Mostly, it's worth knowing about the idea, to avoid having to invent it from scratch one day;-).
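For instance, continuing the snippet above, the membership test mentioned works the same way (a small sketch):
l = [8, 10, 4, 5, 7]
if X(is_odd) in l:  # each element is compared via X.__eq__, i.e. is_odd(element)
    print('the list contains at least one odd number')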
Not one single function, but in Python 2 (where map returns a list) you can do it pretty easily:
>>> test = lambda c: c == 'x'
>>> data = ['a', 'b', 'c', 'x', 'y', 'z', 'x']
>>> map(test, data).index(True)
3
If you don't want to evaluate the entire list at once you can use itertools, but it's not as pretty:
>>> from itertools import imap, ifilter
>>> from operator import itemgetter
>>> test = lambda c: c == 'x'
>>> data = ['a', 'b', 'c', 'x', 'y', 'z']
>>> ifilter(itemgetter(1), enumerate(imap(test, data))).next()[0]
3
Just using a generator expression is probably more readable than itertools though.
Note in Python3, map and filter return lazy iterators and you can just use:
from operator import itemgetter
test = lambda c: c == 'x'
data = ['a', 'b', 'c', 'x', 'y', 'z']
next(filter(itemgetter(1), enumerate(map(test, data))))[0] # 3
A variation on Alex's answer. This avoids having to type X every time you want to use is_odd or whichever predicate you need:
>>> class X(object):
...     def __init__(self, pred): self.pred = pred
...     def __eq__(self, other): return self.pred(other)
...
>>> L = [8,10,4,5,7]
>>> is_odd = X(lambda x: x%2 != 0)
>>> L.index(is_odd)
3
>>> less_than_six = X(lambda x: x<6)
>>> L.index(less_than_six)
2
You could do this with a list comprehension:
l = [8,10,4,5,7]
filterl = [a for a in l if a % 2 != 0]
Then filterl will contain all members of the list fulfilling the expression a % 2 != 0. I would say that's a more elegant method...
Intuitive one-liner solution:
i = list(map(lambda value: value > 0, data)).index(True)
Explanation:
We use the map function to create a list of True/False values, depending on whether each element of our list passes the condition in the lambda.
We then convert the map output to a list.
Then, using the index function, we get the index of the first True, which is the same as the index of the first value passing the condition.
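A quick usage sketch with concrete data (the data variable here is my own example):
data = [-3, -1, 0, 4, 7]
i = list(map(lambda value: value > 0, data)).index(True)
print(i)  # 3: the first positive value is data[3]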
Here is the way I came up with:
import re

a = 'bats bear'
b = 'cats pear'

def sub_strings(a, b):
    for s in [a, b]:
        s = re.sub('\\b.ear\\b', '', s)
    return a, b

a, b = sub_strings(a, b)
But that doesn't work at all, and the function still outputs the original strings ('bats bear', 'cats pear'). What's wrong with this approach?
s = re.sub('\\b.ear\\b', '', s)
does not do what you think it does. It merely rebinds the variable named s to the modified string returned by re.sub(). It does not alter the variables a or b. You can check that by printing out the value of s in the loop.
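A small sketch demonstrating this (printing inside the loop shows that the substitution does happen, while a and b stay untouched):
import re

a = 'bats bear'
b = 'cats pear'

def sub_strings(a, b):
    for s in [a, b]:
        s = re.sub(r'\b.ear\b', '', s)
        print(repr(s))  # 'bats ' then 'cats ': s was rebound to the new string
    return a, b

print(sub_strings(a, b))  # ('bats bear', 'cats pear'): a and b are unchanged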
Instead you can return a generator:
def sub_strings(a, b):
    return (re.sub(r'\b.ear\b', '', s) for s in (a, b))
A list comprehension will also work:
def sub_strings(a, b):
    return [re.sub(r'\b.ear\b', '', s) for s in (a, b)]
Either way, the result will be unpacked into the variables a and b as required.
You might want to consider generalising the function so that it accepts an arbitrary number of parameters:
def sub_strings(*args):
    return (re.sub(r'\b.ear\b', '', s) for s in args)
Now you can call it with any number of arguments:
>>> print(list(sub_strings('bats bear', 'cats pear', 'rats hear')))
['bats ', 'cats ', 'rats ']
>>> print(list(sub_strings('bats bear', 'cats pear', 'rats hear', 'gnats rear')))
['bats ', 'cats ', 'rats ', 'gnats ']
The problem you are having is that in Python, strings (i.e., str type objects) are immutable. Because a string object cannot be changed, any method you call on a string never changes the original string; it always remains the same:
>>> s = 'abc'
>>> s.replace('abc', 'def')  # perform some method on s
'def'
>>> print(s)  # has s been changed?
abc  # NOPE
If you want to get a manipulated version of your string, you have to save the manipulated version somewhere and return THAT. The other answers that have been provided show clearly how to do this.
As for your actual problem, I would suggest using a generator. A generator is a function that behaves very differently from a normal function. One of these differences is that a generator function is capable of producing multiple results, one at a time, from a single function call.
To make a generator, instead of using the word return, you use yield. Here is an example:
import re

a = 'bats bear'
b = 'cats pear'

def sub_strings(*strings):
    for s in strings:
        yield re.sub('\\b.ear\\b', '', s)

a, b = sub_strings(a, b)  # the generator is "unpacked" here
Note that the *strings syntax allows the function to accept multiple arguments. The arguments are available inside your function as a tuple named strings.
The reason the above code works is that the last line automatically performs an UNPACKING of the generator. In other words, each result is yielded one at a time and bound to the corresponding name on the left.
Be careful, however, that you don't try to do THIS:
a = sub_strings(a) # BAD!
This will NOT work the way you expect, because a = sub_strings(a) does not unpack the generator; it instead creates a generator and assigns it to a. A clarification on terminology: sub_strings is a generator function; sub_strings(a, b, c) creates a generator using that generator function.
To unpack the generator to a single name, do this instead:
a, = sub_strings(a) # Note the comma
The extra comma makes the left-hand side a tuple of targets instead of a single name. This lets the interpreter know that you mean to "unpack" the generator into the lone name a.
I like this syntax very much because it keeps you from making errors that are not always easy to see. For example, if you provide too many arguments to sub_strings but not enough variables, it will give you an error and let you know there is a problem:
>>> a, b = sub_strings(a, b, c) # extra c argument
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: too many values to unpack (expected 2)
Another way to use your generator is to simply stuff the results into a list, a tuple, or anything else that accepts an iterable object (generators are iterable):
>>> results = list(sub_strings(a, b, c, d, e, f))
There is also another very nice alternative syntax that does the same thing. Here we see that star again (some people call it the "splat"). The splat "unpacks" the generator one value at a time, much the same as it was automatically unpacked before:
>>> results = [*sub_strings(a, b, c, d, e, f)]
Lastly: you don't even have to define a function to make a generator. You can instead just use what is called a generator expression.
>>> a, b = (re.sub('\\b.ear\\b', '', s) for s in (a, b))
You can use such an expression in any of the places we used our generator above:
>>> results = list((re.sub('\\b.ear\\b', '', s) for s in (a, b)))
>>> results = [*(re.sub('\\b.ear\\b', '', s) for s in (a, b))]
Observe that the generator expression replaces the generator function call (which creates a generator) in the previous versions of the code.
However, if your goal is a list, an even shorter syntax is what is called a list comprehension:
>>> results = [re.sub('\\b.ear\\b', '', s) for s in (a, b)]
There is much, MUCH more to learn about Python generators.
Try this:
import re

a = 'bats bear'
b = 'cats pear'

def sub_strings(a, b):
    result = []
    for s in [a, b]:
        result.append(re.sub('\\b.ear\\b', '', s))
    return result[0], result[1]

a, b = sub_strings(a, b)
The dictionary __getitem__ method does not seem to work the same way as the list one, and it is causing me headaches. Here is what I mean:
If I subclass list, I can overload __getitem__ as:
class myList(list):
    def __getitem__(self, index):
        if isinstance(index, int):
            pass  # do one thing
        if isinstance(index, slice):
            pass  # do another thing
If I subclass dict, however, __getitem__ does not expose an index, but a key instead, as in:
class myDict(dict):
    def __getitem__(self, key):
        pass  # Here I want to inspect the INDEX, but only have access to key!
So, my question is how can I intercept the index of a dict, instead of just the key?
Example use case:
a = myDict()
a['scalar'] = 1 # Create dictionary entry called 'scalar', and assign 1
a['vector_1'] = [1,2,3,4,5] # I want all subsequent vectors to be 5 long
a['vector_2'][[0,1,2]] = [1,2,3] # I want to intercept this and force vector_2 to be 5 long
print(a['vector_2'])
[1,2,3,0,0]
a['test'] # This should throw a KeyError
a['test'][[0,2,3]] # So should this
Dictionaries have no order, so there is no index to pass in; Python uses the same syntax ([..]) and the same magic method (__getitem__) for both lists and dictionaries, but for a dictionary the argument is always treated as a key.
When you index a dictionary on an integer like 0, the dictionary treats that like any other key:
>>> d = {'foo': 'bar', 0: 42}
>>> d.keys()
[0, 'foo']
>>> d[0]
42
>>> d['foo']
'bar'
Chained indexing applies to return values; the expression:
a['vector_2'][0, 1, 2]
is executed as:
_result = a['vector_2'] # via a.__getitem__('vector_2')
_result[0, 1, 2] # via _result.__getitem__((0, 1, 2))
so if you want values in your dictionary to behave in a certain way, you must return objects that support those operations.
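As a rough sketch of that idea (my own illustration, not part of the original answer; the FixedVector class and the rule that keys starting with 'vector' are auto-created are hypothetical choices):
class FixedVector:
    # wraps a fixed-length list and accepts a list of indices for get/set
    def __init__(self, length=5):
        self._data = [0] * length

    def __setitem__(self, index, value):
        if isinstance(index, list):          # fancy indexing: several positions at once
            for i, v in zip(index, value):
                self._data[i] = v
        else:
            self._data[index] = value

    def __getitem__(self, index):
        if isinstance(index, list):
            return [self._data[i] for i in index]
        return self._data[index]

    def __repr__(self):
        return repr(self._data)

class MyDict(dict):
    def __missing__(self, key):              # called when a key is not found
        if isinstance(key, str) and key.startswith('vector'):
            vec = FixedVector()              # hypothetical rule: auto-create vectors
            self[key] = vec
            return vec
        raise KeyError(key)                  # other unknown keys still raise

a = MyDict()
a['vector_2'][[0, 1, 2]] = [1, 2, 3]  # __missing__ creates the vector, FixedVector handles the fancy assignment
print(a['vector_2'])                  # [1, 2, 3, 0, 0]
a['test']                             # raises KeyError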
Forgive me if this has been asked before. I did not know how to search for it.
I'm quite familiar with the following idiom:
def foo():
    return [1, 2, 3]

[a, b, c] = foo()
(d, e, f) = foo()
wherein the values contained within the left hand side will be assigned based upon the values returned from the function on the right.
I also know you can do
def bar():
    return {'a': 1, 'b': 2, 'c': 3}

(one, two, three) = bar()
[four, five, six] = bar()
wherein the keys of the dictionary returned from the right-hand side will be assigned to the names on the left-hand side.
However, I'm curious, is there a way to do the following in Python 2.6 or earlier:
{letterA:one, letterB:two, letterC:three} = bar()
and have it work in the same manner that it works for sequences to sequences? If not, why? Naively attempting to do this as I've written it will fail.
Dictionary items do not have an order, so while this works:
>>> def bar():
...     return dict(a=1, b=2, c=3)
...
>>> bar()
{'a': 1, 'c': 3, 'b': 2}
>>> (lettera,one),(letterb,two),(letterc,three) = bar().items()
>>> lettera,one,letterb,two,letterc,three
('a', 1, 'c', 3, 'b', 2)
You can see that you can't necessarily predict how the variables will be assigned. You could use collections.OrderedDict (available since Python 2.7) to control this.
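A small sketch of that idea (my own illustration):
from collections import OrderedDict

def bar():
    # an OrderedDict remembers insertion order, so the unpacking below is predictable
    return OrderedDict([('a', 1), ('b', 2), ('c', 3)])

(lettera, one), (letterb, two), (letterc, three) = bar().items()
print(lettera, one, letterb, two, letterc, three)  # a 1 b 2 c 3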
If you modify bar() to return a dict (as suggested by @mikerobi), you might still want to preserve the keys that are already in your existing dict. In this case, use update:
mydict = {}
mydict['existing_key'] = 100
def bar_that_says_dict():
    return {'new_key': 101}
mydict.update(bar_that_says_dict())
print mydict
This should output a dict with both existing_key and new_key. If mydict had a key of new_key, then the update would overwrite it with the value returned from bar_that_says_dict.
No. If you cannot change the bar function, you could create a dict from its output pretty easily:
dict(zip(['one', 'two', 'three'], bar()))
This is the most compact solution, but I would prefer to modify the bar function to return a dict.
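A quick sketch of what this produces, assuming for illustration a bar() that returns a plain tuple (the bar() in the question returns a dict):
def bar():
    return (1, 2, 3)  # illustrative stand-in

result = dict(zip(['one', 'two', 'three'], bar()))
print(result)  # {'one': 1, 'two': 2, 'three': 3}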