alist = []
def show(*args, **kwargs):
    alist.append(*args, **kwargs)
    print(alist)
>>> show('tiger')
['tiger']
>>> show('tiger','cat')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in show
TypeError: append() takes exactly one argument (2 given)
>>> show('tiger','cat', {'name':'tom'})
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in show
TypeError: append() takes exactly one argument (3 given)
Since the append method of alist only accepts one argument, why doesn't Python report a syntax error on the line alist.append(*args, **kwargs) in the definition of show?
It's not a syntax error because the syntax is perfectly fine and that function may or may not raise an error depending on how you call it.
The way you're calling it:
alist = []
def show(*args, **kwargs):
    alist.append(*args, **kwargs)
    print(alist)
>>> show('tiger')
['tiger']
>>> show('tiger','cat')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in show
TypeError: append() takes exactly one argument (2 given)
A different way:
alist = []
def show(*args, **kwargs):
    alist.append(*args, **kwargs)
    print(alist)
>>> show('tiger')
['tiger', 'tiger']
>>> class L: pass
...
>>> alist = L()
>>> alist.append = print
>>> show('tiger','cat')
tiger cat
<__main__.L object at 0x000000A45DBCC048>
Python objects are strongly typed. The names that bind to them are not. Nor are function arguments. Given Python's dynamic nature it would be extremely difficult to statically predict what type a variable at a given source location will be at execution time, so the general rule is that Python doesn't bother trying.
In your specific example, alist is not in the local scope. Therefore it can be modified after your function definition was executed and the changes will be visible to your function, cf. code snippets below.
So, in accordance with the general rule: predicting whether or not alist will still be a list when you call .append is near-impossible. In particular, the interpreter cannot predict that this call will be an error.
Here is some code just to drive home the point that static type checking is by all practical means impossible in Python. It uses non-local variables as in your example.
funcs = []
for a in [1, "x", [2]]:
    def b():
        def f():
            print(a)
        return f
    funcs.append(b())
for f in funcs:
    f()
Output:
[2] # value of a at definition time (of f): 1
[2] # value of a at definition time (of f): 'x'
[2] # value of a at definition time (of f): [2]
And similarly for non-global non-local variables:
funcs = []
for a in [1, "x", [2]]:
    def b(a):
        def f():
            print(a)
        a = a+a
        return f
    funcs.append(b(a))
for f in funcs:
    f()
Output:
2 # value of a at definition time (of f): 1
xx # value of a at definition time (of f): 'x'
[2, 2] # value of a at definition time (of f): [2]
It's not a syntax error because it's resolved at runtime. Syntax errors are caught up front during parsing: things like unmatched brackets or otherwise malformed code. Passing the wrong number of arguments is not a syntax problem, and this isn't even a missing argument, since *args means any number of arguments.
show has no way of knowing what you'll pass it at runtime, and since you are expanding your args variable inside show, any number of arguments could come in, and it's valid syntax. list.append takes exactly one argument: one tuple, one list, one int, one string, one instance of a custom class, etc. What you are passing it is some number of elements that depends on the input. If you remove the * it's all fine, because args is then a single element, e.g. alist.append(args).
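For example, an illustrative variant that drops the * so that append always receives exactly one argument, storing the whole args tuple as a single element:

alist = []
def show(*args, **kwargs):
    alist.append(args)   # one argument: the tuple of positional args
    print(alist)

>>> show('tiger', 'cat')
[('tiger', 'cat')]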
All this means that your show function is faulty. It is equipped to handle args only when it has length 1. If it's empty you also get a TypeError at the point append is called; if it's longer than that, it's broken, but you won't know until you run it with the bad input.
You could loop over the elements in args (and kwargs) and add them one by one.
alist = []
def show(*args, **kwargs):
    for a in args:
        alist.append(a)
    for kv in kwargs.items():
        alist.append(kv)
    print(alist)
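With that loop-based version, the calls from the question no longer raise, and keyword arguments end up stored as (key, value) tuples (illustrative call):

>>> show('tiger', 'cat', name='tom')
['tiger', 'cat', ('name', 'tom')]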
Related
I have a function which goes something like this:
def do_something(lis):
    # do something
    return lis[0], lis[1]
and another function which needs to take those two return objects as arguments:
def other_function(arg1, arg2):
    pass
I have tried:
other_function(do_something(lis))
but incurred this error:
TypeError: other_function() missing 1 required positional argument: 'arg2'
You need to unpack those arguments when calling other_function.
other_function(*do_something(lis))
Based on the error message, it looks like your other function is defined as (and should stay defined as):
def other_function(arg1, arg2):
    pass
So, when you return from do_something, you are actually returning a single tuple containing (lis[0], lis[1]). When you originally called other_function, you were passing it that one tuple while other_function was still expecting a second argument.
You can see this if you break it down a bit further. Below is a breakdown of what the returns look like when handled differently, a reproduction of your error, and a demo of the solution:
Returning into a single variable gives you a tuple of the result:
>>> def foo():
...     lis = range(10)
...     return lis[1], lis[2]
...
>>> result = foo()
>>> result
(1, 2)
Returning into two variables unpacks into each variable:
>>> res1, res2 = foo()
>>> res1
1
>>> res2
2
Trying to call other_function with result, which now holds only the tuple of your result:
>>> def other_function(arg1, arg2):
...     print(arg1, arg2)
...
>>> other_function(result)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: other_function() missing 1 required positional argument: 'arg2'
Calling other_function with res1 and res2, which hold each value returned from foo:
>>> other_function(res1, res2)
1 2
Using result (your tuple result) and unpacking in your function call to other_function:
>>> other_function(*result)
1 2
You can do it like this:
other_function(*do_something(lis))
The * char will expand the tuple returned by do_something.
Your do_something function is actually returning a tuple which contains several values but is only one value itself.
See the documentation on unpacking argument lists for more details.
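For reference, the starred call is just shorthand for unpacking into two names first and passing them along (a minimal sketch using the names from the question):

first, second = do_something(lis)
other_function(first, second)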
defaultdicts are useful objects that let you have a dictionary that can create new keys on the fly, with a callable used to define the default value, e.g. using str to make an empty string the default.
>>> from collections import defaultdict
>>> food = defaultdict(str)
>>> food['apple']
''
You can also use lambda to make an expression be the default value.
>>> food = defaultdict(lambda: "No food")
>>> food['apple']
'No food'
However, you can't pass any parameters to this lambda: defaultdict calls the default factory with no arguments, so a lambda that expects a parameter raises an error as soon as a missing key triggers it.
>>> food = defaultdict(lambda x: "{} food".format(x))
>>> food['apple']
Traceback (most recent call last):
File "<pyshell#9>", line 1, in <module>
food['apple']
TypeError: <lambda>() takes exactly 1 argument (0 given)
Even if you try to supply the parameter yourself, the lookup food['apple'] is evaluated first and triggers the zero-argument call, so it still fails:
>>> food['apple'](12)
Traceback (most recent call last):
File "<pyshell#9>", line 1, in <module>
food['apple']
TypeError: <lambda>() takes exactly 1 argument (0 given)
How could these lambda functions be made responsive rather than rigid expressions?
Using a variable in the expression can actually circumvent this somewhat.
>>> from collections import defaultdict
>>> baseLevel = 0
>>> food = defaultdict(lambda: baseLevel)
>>> food['banana']
0
>>> baseLevel += 10
>>> food['apple']
10
>>> food['banana']
0
The default lambda expression is tied to a variable that can change without affecting the keys it has already created. This is particularly useful when it is tied to other functions that are only evaluated when a nonexistent key is accessed.
>>> import time
>>> joinTime = defaultdict(lambda: time.time())
>>> joinTime['Steven']
1432137137.774
>>> joinTime['Catherine']
1432137144.704
>>> for customer in joinTime:
...     print customer, joinTime[customer]
...
Catherine 1432137144.7
Steven 1432137137.77
Ugly but may be useful to someone:
from collections import defaultdict

class MyDefaultDict(defaultdict):
    def __init__(self, func):
        super().__init__(self._func)
        self.func = func
    def _func(self):
        return self.func(self.cur_key)
    def __getitem__(self, key):
        self.cur_key = key
        return super().__getitem__(self.cur_key)
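A quick illustrative use of that class, deriving each default value from the missing key itself:

>>> d = MyDefaultDict(lambda key: "{} food".format(key))
>>> d['apple']
'apple food'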
I'm running Python 3.4.2, and I'm confused at the behavior of my code. I'm trying to create a list of callable polynomial functions with increasing degree:
bases = [lambda x: x**i for i in range(3)]
But for some reason it does this:
print([b(5) for b in bases])
# [25, 25, 25]
Why does bases seem to be just the last lambda expression from the list comprehension, repeated?
The problem, which is a classic "gotcha", is that the i referenced in the lambda functions is not looked up until the lambda function is called. At that time, the value of i is the last value it was bound to when the for-loop ended, i.e. 2.
If you bind i to a default value in the definition of the lambda functions, then each i becomes a local variable, and its default value is evaluated and bound to the function at the time the lambda is defined rather than called.
Thus, when the lambda is called, i is now looked up in the local scope, and its default value is used:
In [177]: bases = [lambda x, i=i: x**i for i in range(3)]
In [178]: print([b(5) for b in bases])
[1, 5, 25]
For reference:
Python scopes and namespaces
As an alternate solution, you could use a partial function:
>>> bases = [(lambda i: lambda x: x**i)(i) for i in range(3)]
>>> print([b(5) for b in bases])
[1, 5, 25]
The only advantage of that construction over the classic solution given by @unutbu is that, this way, you cannot introduce sneaky bugs by calling your function with the wrong number of arguments:
>>> print([b(5, 8) for b in bases])
# ^^^
#       oops
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in <listcomp>
TypeError: <lambda>() takes 1 positional argument but 2 were given
As suggested by Adam Smith in a comment below, instead of using a "nested lambda" you could use functools.partial with the same benefit:
>>> import functools
>>> bases = [functools.partial(lambda i,x: x**i,i) for i in range(3)]
>>> print([b(5) for b in bases])
[1, 5, 25]
>>> print([b(5, 8) for b in bases])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in <listcomp>
TypeError: <lambda>() takes 2 positional arguments but 3 were given
A more 'Pythonic' approach, using nested functions:
def polyGen(degree):
    def degPolynom(n):
        return n**degree
    return degPolynom

polynoms = [polyGen(i) for i in range(5)]
[pol(5) for pol in polynoms]
Output:
[1, 5, 25, 125, 625]
I don't think the "why this happens" aspect of the question has been answered yet.
The reason that non-local names in a function are not treated as constants is so that non-local names match the behaviour of global names. That is, changes to a global name after a function is created are observed when the function is called.
eg.
# global context
n = 1
def f():
    return n
n = 2
assert f() == 2

# non-local context
def f():
    n = 1
    def g():
        return n
    n = 2
    assert g() == 2
    return g
assert f()() == 2
You can see that in both the global and non-local contexts, if the value of a name is changed, then that change is reflected in future invocations of the function that references the name. If globals and non-locals were treated differently, that would be confusing. Thus, the behaviour is made consistent. If you need the current value of a name to be made constant for a new function, then the idiomatic way is to delegate the creation of the function to another function. The function is created in the creating function's scope (where nothing changes), and thus the value of the name will not change.
eg.
def create_constant_getter(constant):
    def constant_getter():
        return constant
    return constant_getter

getters = [create_constant_getter(n) for n in range(5)]
constants = [f() for f in getters]
assert constants == [0, 1, 2, 3, 4]
Finally, as an addendum, functions can modify non-local names (if the name is marked as such) just as they can modify global names. eg.
def f():
    n = 0
    def increment():
        nonlocal n
        n += 1
        return n
    return increment

g = f()
assert g() + 1 == g()
One of my coworkers was using the builtin max function (on Python 2.7), and he found a weird behavior.
By mistake, instead of using the keyword argument key (as in key=lambda n: n) to pre-sort the list passed as a parameter, he did:
>>> max([1,2,3,3], lambda n : n)
[1, 2, 3, 3]
He was doing what the documentation explains as: "If two or more positional arguments are provided, the largest of the positional arguments is returned." So now I'm curious about why this happens:
>>> (lambda n:n) < []
True
>>> def hello():
... pass
...
>>> hello < []
True
>>> len(hello)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: object of type 'function' has no len()
I know it's not a big deal, but I'd appreciate it if any of the stackoverflowers could explain how those comparisons are made internally (or point me in a direction where I can find that information). :-)
Thank you in advance!
Python 2 orders objects of different types rather arbitrarily. It did this to make lists always sortable, whatever their contents. Which way such a comparison comes out really doesn't matter; what matters is that one side consistently wins. As it happens, the C implementation falls back to comparing type names; a lambda's type name is 'function', which sorts before 'list'.
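You can see the fallback ordering this describes by comparing the type names directly (illustrative snippet; the automatic fallback itself only happens in Python 2, but the name comparison works anywhere):

>>> type(lambda n: n).__name__
'function'
>>> type([]).__name__
'list'
>>> 'function' < 'list'
True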
In Python 3, your code would raise an exception instead:
>>> (lambda n: n) < []
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unorderable types: function() < list()
because, as you found out, supporting arbitrary comparisons mostly leads to hard-to-crack bugs.
Everything in Python 2 can be compared, but some of those comparisons are fairly nonsensical, as you've seen.
>>> (lambda n:n) < []
True
Python 3 resolves this, and produces exceptions instead.
I have a program as follows:
a = reader.next()
if <some condition holds>:
    # Do some processing and continue the iteration
else:
    # Append the variable a back to the iterator
    # That is, nullify the operation a = reader.next()
How do I add an element to the start of the iterator?
(Or is there an easier way to do this?)
EDIT: OK, let me put it this way: I need the next element of an iterator without removing it.
How do I do this?
You're looking for itertools.chain:
import itertools
values = iter([1,2,3]) # the iterator
value = 0 # the value to prepend to the iterator
together = itertools.chain([value], values) # there it is
list(together)
# -> [0, 1, 2, 3]
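Building on chain, here is a small sketch of a helper (the name peek is made up here) that reads the next value but hands back an iterator with that value still at the front, which matches the "next element without removing it" requirement from the edit:

import itertools

def peek(iterator):
    first = next(iterator)  # consume one value
    # hand back the value plus an iterator with it re-attached at the front
    return first, itertools.chain([first], iterator)

values = iter([1, 2, 3])
value, values = peek(values)
print(value)        # -> 1
print(list(values)) # -> [1, 2, 3]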
Python iterators, as such, have very limited functionality -- no "appending" or anything like that. You'll need to wrap the generic iterator in a wrapper adding that functionality. E.g.:
class Wrapper(object):
    def __init__(self, it):
        self.it = it
        self.pushedback = []
    def __iter__(self):
        return self
    def next(self):
        if self.pushedback:
            return self.pushedback.pop()
        else:
            return self.it.next()
    def pushback(self, val):
        self.pushedback.append(val)
This is Python 2.5 (should work in 2.6 too) -- slight variants advised for 2.6 and mandatory for 3.any (use next(self.it) instead of self.it.next() and define __next__ instead of next).
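A minimal sketch of that Python 3 variant, just applying the changes described above (define __next__ and use next(self.it)):

class Wrapper:
    def __init__(self, it):
        self.it = iter(it)
        self.pushedback = []
    def __iter__(self):
        return self
    def __next__(self):
        # serve pushed-back values first, then fall back to the wrapped iterator
        if self.pushedback:
            return self.pushedback.pop()
        return next(self.it)
    def pushback(self, val):
        self.pushedback.append(val)

w = Wrapper([1, 2, 3])
first = next(w)     # consume one value...
w.pushback(first)   # ...then push it back
print(list(w))      # -> [1, 2, 3]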
Edit: the OP now says what they need is "peek ahead without consuming". Wrapping is still the best option, but an alternative is:
import itertools
...
o, peek = itertools.tee(o)
if isneat(peek.next()): ...
this doesn't advance o (remember to advance it if and when you decide you DO want to ;-)).
By design (as a general development concept), iterators are intended to be read-only, and any attempt to change them would break things.
Alternatively, you could read the iterator backwards and add the element at the end (which is actually the start :) )?
This isn't too close to what you asked for, but if you have control over the generator and you don't need to "peek" before the value is generated (and any side effects have occurred), you can use the generator's send method to tell the generator to repeat the last value it yielded:
>>> def a():
...     for x in (1, 2, 3):
...         rcvd = yield x
...         if rcvd is not None:
...             yield x
...
>>> gen = a()
>>> gen.send("just checking")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can't send non-None value to a just-started generator
>>> gen.next()
1
>>> gen.send("just checking")
1
>>> gen.next()
2
>>> gen.next()
3
>>> gen.next()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration