Python: order in which functions in a print statement are called

Let's say I have
def foo(n):
    print("foo", n)

def bar(n):
    print("bar", n)

print("Hello", foo(1), bar(1))
I would expect the output to be:
Hello
foo 1 None
bar 1 None
But instead I get something which surprised me:
foo 1
bar 1
Hello None None
Why does Python call the functions before printing "Hello"? It seems like it would make more sense to print "Hello", then call foo(1) and let it print its output, then print "None" as its return value; then call bar(1), print that output, and print "None" as its return value. Is there a reason Python (or maybe other languages) calls the functions this way instead of evaluating each argument at the point where it is printed?
Edit: Now, my followup question is what's happening internally with Python somehow temporarily storing return values of each argument if it's evaluating the expressions left to right? For example, now I understand it will evaluate each expression left to right, but the final line says Hello None None, so is Python somehow remembering from the execution of each function that the second argument and third arguments have a return value of None? For example, when evaluating foo(), it will print foo 1 and then hit no return statement, so is it storing in memory that foo didn't return a value?

Quoting from the documentation:
Python evaluates expressions from left to right. Notice that while evaluating an assignment, the right-hand side is evaluated before the left-hand side.
Bold emphasis mine. So, all expressions are first evaluated and then passed to print.
Observe the byte code for the print call:
1 0 LOAD_NAME 0 (print)
3 LOAD_CONST 0 ('Hello')
6 LOAD_NAME 1 (foo)
9 LOAD_CONST 1 (1)
12 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
15 LOAD_NAME 2 (bar)
18 LOAD_CONST 1 (1)
21 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
24 CALL_FUNCTION 3 (3 positional, 0 keyword pair)
27 RETURN_VALUE
foo (offset 12) and bar (offset 21) are called first, followed by print (offset 24, with 3 positional arguments).
As to the question of where these intermediate computed values are stored: the call stack. print accesses the return values simply by popping them off the stack. - Christian Dean

As is specified in the documentation:
Python evaluates expressions from left to right. Notice that while evaluating an assignment, the right-hand side is evaluated before the left-hand side.
This thus means that if you write:
print("Hello",foo(1),bar(1))
It is equivalent to:
arg1 = "Hello"
arg2 = foo(1)
arg3 = bar(1)
print(arg1,arg2,arg3)
So the arguments are evaluated before the function call.
This also happens when we for instance have a tree:
def foo(*x):
    print(x)
    return x

print(foo(foo('a'),foo('b')),foo(foo('c'),foo('d')))
This prints as:
>>> print(foo(foo('a'),foo('b')),foo(foo('c'),foo('d')))
('a',)
('b',)
(('a',), ('b',))
('c',)
('d',)
(('c',), ('d',))
(('a',), ('b',)) (('c',), ('d',))
Since Python evaluates arguments left to right, it will first evaluate foo(foo('a'),foo('b')); but in order to do that, it first needs to evaluate foo('a'), followed by foo('b'). Then it can call foo(foo('a'),foo('b')) with the results of those previous calls.
Then it wants to evaluate the second argument foo(foo('c'),foo('d')). But in order to do this, it thus first evaluates foo('c') and foo('d'). Next it can evaluate foo(foo('c'),foo('d')), and then finally it can evaluate the final expression: print(foo(foo('a'),foo('b')),foo(foo('c'),foo('d'))).
So the evaluation is equivalent to:
arg11 = foo('a')
arg12 = foo('b')
arg1 = foo(arg11, arg12)
arg21 = foo('c')
arg22 = foo('d')
arg2 = foo(arg21, arg22)
print(arg1, arg2)
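This left-to-right order is easy to confirm by logging the calls (a small runnable sketch; the logging list is my addition, not part of the original answer):

```python
calls = []

def foo(n):
    calls.append("foo")
    return n

def bar(n):
    calls.append("bar")
    return n

# all three arguments are evaluated left to right first;
# only then is the enclosing call made
result = ("Hello", foo(1), bar(1))
assert calls == ["foo", "bar"]
assert result == ("Hello", 1, 1)
```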

The enclosing function is not called until all of its arguments have been evaluated. This is consistent with the basic rules of mathematics that state that operations within parentheses are performed before those outside. As such print() will always happen after both foo() and bar().

The answer is simple:
In Python, the arguments of a function like print are always evaluated first, left to right.
Take a look at this stackoverflow question: In which order is an if statement evaluated in Python
And None is just the return value of the function. It executes the function first and then prints its return value.
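A quick check of that last point (a minimal sketch, not from the original answer): a function that falls off the end without a return statement returns None:

```python
def foo(n):
    print("foo", n)   # runs first; there is no return statement

value = foo(1)        # "foo 1" is printed during evaluation
assert value is None  # the implicit return value is None
```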

Related

Why does a decorator function have a return value?

Consider the following example.
def decorator(function_to_decorate):
    def wrapper():
        print('Entering', function_to_decorate.__name__)
        function_to_decorate()
        print('Exiting', function_to_decorate.__name__)
    return wrapper

@decorator
def func():
    print("Original function.")

func()
As the @decorator syntax is just shorthand for func = decorator(func), it is logical that the decorator must return something. My question is: why are decorators defined this way, and not without a return value, as plain decorator(func)? What is the purpose of returning the wrapper function?
EDIT
How does a decorator do more than a simple wrapper?
def wrapper(function_to_decorate):
    print('Entering', function_to_decorate.__name__)
    function_to_decorate()
    print('Exiting', function_to_decorate.__name__)

def func():
    print("Original function.")

wrapper(func)
Imagine if you could apply a decorator to a regular variable assignment, like this:
def add1(x):
    return x + 1

@add1
number = 5
The analogous behaviour to a function decorator would be like this:
number = 5
number = add1(number)
This would result in assigning the value 6 to the variable number. Now imagine that the decorator was just called without returning anything:
number = 5
add1(number)
There is no way this code could possibly assign 6 to the variable number: the function receives only the value that number refers to, not the variable itself. In Python, a function cannot rebind a name in a completely different scope which it has no access to.
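That point can be verified directly (a small runnable sketch using the add1 example from above):

```python
def add1(x):
    return x + 1

number = 5
add1(number)           # the return value is discarded
assert number == 5     # number is untouched

number = add1(number)  # rebinding must happen at the call site
assert number == 6
```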
A def statement is really a kind of assignment; it assigns the function to the name you defined it with. For example, the function definition def func(): pass compiles to bytecode that does a STORE_NAME, i.e. an assignment:
1 0 LOAD_CONST 0 (<code object func at ...>)
3 LOAD_CONST 1 ('func')
6 MAKE_FUNCTION 0
9 STORE_NAME 0 (func)
So the behaviour of function decorators works the same way as above, for the same reason; the decorator function cannot reassign a new function to the variable func in a completely different scope, because func is passed by value to the decorator, not by reference.
The func = decorator(func) equivalence is actually a bit misleading. To be fully correct, when you use a decorator, the function you defined in the def statement is passed directly to the decorator, not assigned to the local name func before being passed. Here's the bytecode:
1 0 LOAD_NAME 0 (decorate)
3 LOAD_CONST 0 (<code object func at ...>)
6 LOAD_CONST 1 ('func')
9 MAKE_FUNCTION 0
12 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
15 STORE_NAME 1 (func)
Step-by-step:
The decorate function is loaded onto the stack,
The code object for func is loaded onto the stack, then the string 'func', then the MAKE_FUNCTION instruction turns those two into a function which is left on the stack.
The CALL_FUNCTION instruction calls the decorate function (which is still on the stack) with one argument, the func function.
Whatever the decorate function returns is left on the stack, and assigned to the name func by the STORE_NAME instruction.
So if the decorator function didn't return anything, there would be nothing to assign to the name func - not even the original function as in the def statement.
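A minimal sketch of what happens when a decorator returns nothing (the decorator name here is my own invention for illustration):

```python
def no_return_decorator(f):
    f()  # calls the function, but implicitly returns None

@no_return_decorator
def func():
    pass

# the decorator's return value (None) was assigned to the name func
assert func is None
```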

When is the existence of nonlocal variables checked?

I am learning Python and right now I am on the topic of scopes and nonlocal statement.
At some point I thought I figured it all out, but then nonlocal came and broke everything down.
Example number 1:
print( "let's begin" )
def a():
def b():
nonlocal x
x = 20
b()
a()
Running it naturally fails.
What is more interesting is that print() does not get executed. Why?
My understanding was that the enclosing def a() is not executed until print() is executed, and the nested def b() is executed only when a() is called. I am confused...
Ok, let's try example number 2:
print( "let's begin" )
def a():
if False: x = 10
def b():
nonlocal x
x = 20
b()
a()
Aaand... it runs fine.
Whaaat?! How did THAT fix it? x = 10 in function a is never executed!
My understanding was that nonlocal statement is evaluated and executed at run-time, searching enclosing function's call contexts and binding local name x to some particular "outer" x. And if there is no x in outer functions - raise an exception. Again, at run-time.
But now it looks like this is done at the time of syntax analysis, with pretty dumb check "look in outer functions for x = blah, if there is something like this - we're fine," even if that x = blah is never executed...
Can anybody explain me when and how nonlocal statement is processed?
You can see what the scope of b knows about free variables (available for binding) from the scope of a, like so:
import inspect

print( "let's begin" )

def a():
    if False:
        x = 10
    def b():
        print(inspect.currentframe().f_code.co_freevars)
        nonlocal x
        x = 20
    b()

a()
Which gives:
let's begin
('x',)
If you comment out the nonlocal line and remove the if statement containing x, then you'll see that the free variables available to b are just ().
So let's look at what bytecode instruction this generates, by putting the definition of a into IPython and then using dis.dis:
In [3]: import dis
In [4]: dis.dis(a)
5 0 LOAD_CLOSURE 0 (x)
2 BUILD_TUPLE 1
4 LOAD_CONST 1 (<code object b at 0x7efceaa256f0, file "<ipython-input-1-20ba94fb8214>", line 5>)
6 LOAD_CONST 2 ('a.<locals>.b')
8 MAKE_FUNCTION 8
10 STORE_FAST 0 (b)
10 12 LOAD_FAST 0 (b)
14 CALL_FUNCTION 0
16 POP_TOP
18 LOAD_CONST 0 (None)
20 RETURN_VALUE
So then let's look at how LOAD_CLOSURE is processed in ceval.c.
TARGET(LOAD_CLOSURE) {
    PyObject *cell = freevars[oparg];
    Py_INCREF(cell);
    PUSH(cell);
    DISPATCH();
}
So we see it must look up x from freevars of the enclosing scope(s).
This is mentioned in the Execution Model documentation, where it says:
The nonlocal statement causes corresponding names to refer to previously bound variables in the nearest enclosing function scope. SyntaxError is raised at compile time if the given name does not exist in any enclosing function scope.
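The compile-time nature of this check can be observed with compile() (a small sketch; the source string is my own, modeled on example 1):

```python
# compiling the source is enough to trigger the error;
# none of the code is ever executed
src = """
def a():
    def b():
        nonlocal x
        x = 20
"""
try:
    compile(src, "<example>", "exec")
    raised = False
except SyntaxError:
    raised = True

assert raised  # the error occurs before any statement runs
```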
First, understand that Python checks your module's syntax, and if it detects something invalid it raises a SyntaxError, which stops the module from running at all. Your first example raises a SyntaxError, but understanding exactly why is pretty complicated. It is easier if you know how __slots__ works, so I will quickly introduce that first.
When a class defines __slots__, it is basically saying that instances should only have those attributes, so each object is allocated memory with space for only those; trying to assign other attributes raises an error:
class SlotsTest:
    __slots__ = ["a", "b"]

x = SlotsTest()
x.a = 1 ; x.b = 2
x.c = 3  # AttributeError: 'SlotsTest' object has no attribute 'c'
The reason x.c = 3 can't work is that there is no memory space to put a .c attribute in.
If you do not specify __slots__, then all instances are created with a dictionary to store the instance variables; dictionaries have no limitation on how many values they contain:
class DictTest:
    pass

y = DictTest()
y.a = 1 ; y.b = 2 ; y.c = 3
print(y.__dict__)  # prints {'a': 1, 'b': 2, 'c': 3}
Python functions work similarly to slots. When Python checks the syntax of your module, it finds all variables assigned (or attempted to be assigned) in each function definition, and uses that when constructing frames during execution.
When you use nonlocal x, it gives an inner function access to a specific variable in the outer function's scope; but if no such variable is defined in the outer function, nonlocal x has no slot to point to.
Global access doesn't run into the same issue, since Python modules store their attributes in a dictionary. So global x is allowed even if there is no global x yet.
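A quick runnable sketch of that last point (function name is mine):

```python
def set_it():
    global x  # allowed even though no global x exists yet
    x = 42

set_it()
assert x == 42  # the assignment created the module-level name
```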

Order of variable reference and assignment in nested function

From the Google Style Guide on lexical scoping:
A nested Python function can refer to variables defined in enclosing
functions, but can not assign to them.
This specification can be seen here:
def toplevel():
    a = 5
    def nested():
        # Tries to print local variable `a`, but `a` is created locally after,
        # so `a` is referenced before assignment. You would need `nonlocal a`
        print(a + 2)
        a = 7
    nested()
    return a

toplevel()
# UnboundLocalError: local variable 'a' referenced before assignment
Reversing the order of the two statements in nested gets rid of this issue:
def toplevel():
    a = 5
    def nested():
        # Two statements' order reversed; `a` is now locally assigned and can
        # be referenced
        a = 7
        print(a + 2)
    nested()
    return a

toplevel()
My question is, what is it about Python's implementation that tells the first function that a will be declared locally (after the print statement)? My understanding is that Python is effectively interpreted line by line. So, shouldn't it default to looking for a nonlocal a at that point in the code?
To elaborate, if I was to use just reference (no assignment),
def toplevel():
    a = 5
    def nested():
        print(a + 2)
    nested()
    return a

toplevel()
somehow the print statement knows to reference the nonlocal a defined in the enclosing function. But if I assign to a local a after that line, the function is almost too smart for its own good.
My understanding is that Python is effectively interpreted line by line.
That's not the right mental model.
The body of the entire function is analysed to determine which names refer to local variables and which don't.
To simplify your example, the following also gives UnboundLocalError:
def func():
    print(a)
    a = 2

func()
Here, func() compiles to the following bytecodes:
2 0 LOAD_FAST 0 (a)
3 PRINT_ITEM
4 PRINT_NEWLINE
3 5 LOAD_CONST 1 (2)
8 STORE_FAST 0 (a)
11 LOAD_CONST 0 (None)
14 RETURN_VALUE
Compare this with
def gunc():
print(a)
which compiles to
2 0 LOAD_GLOBAL 0 (a)
3 PRINT_ITEM
4 PRINT_NEWLINE
5 LOAD_CONST 0 (None)
8 RETURN_VALUE
Observe how the absence of assignment to a turns the reference from a local to a global one.
My understanding is that Python is effectively interpreted line by line
That's where you're wrong. The whole file is compiled to bytecode before any interpretation begins.
Also, even if the bytecode compilation pass didn't exist, print(a + 2) wouldn't actually be executed before a = 7 is seen, because it's in a function definition. Python would still know about the a = 7 by the time it actually tries to execute print(a + 2).
As per the documentation:
A special quirk of Python is that – if no global statement is in effect – assignments to names always go into the innermost scope. Assignments do not copy data — they just bind names to objects.
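The difference can be checked directly (a runnable sketch based on the func/gunc examples above; the module-level a is my addition):

```python
a = 10

def func():
    print(a)  # a is local here because of the assignment below
    a = 2

def gunc():
    return a  # no assignment anywhere in gunc, so a is the global

assert gunc() == 10

try:
    func()
    raised = False
except UnboundLocalError:
    raised = True
assert raised  # local a referenced before assignment
```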

Is looping through a generator in a loop over that same generator safe in Python?

From what I understand, a for x in a_generator: foo(x) loop in Python is roughly equivalent to this:
try:
    while True:
        foo(next(a_generator))
except StopIteration:
    pass
That suggests that something like this:
for outer_item in a_generator:
    if should_inner_loop(outer_item):
        for inner_item in a_generator:
            foo(inner_item)
            if stop_inner_loop(inner_item): break
    else:
        bar(outer_item)
would do two things:
Not raise any exceptions, segfault, or anything like that
Iterate over a_generator until it reaches some outer_item for which should_inner_loop(outer_item) returns truthy, then loop over the same generator in the inner for until stop_inner_loop(inner_item) returns true. Then, the outer loop resumes where the inner one left off.
From my admittedly not very good tests, it seems to perform as above. However, I couldn't find anything in the spec guaranteeing that this behavior is constant across interpreters. Is there anywhere that says or implies that I can be sure it will always be like this? Can it cause errors, or perform in some other way (i.e. do something other than what's described above)?
N.B. The code equivalent above is taken from my own experience; I don't know if it's actually accurate. That's why I'm asking.
TL;DR: it is safe with CPython (but I could not find any specification of this), although it may not do what you want to do.
First, let's talk about your first assumption, the equivalence.
A for loop actually calls first iter() on the object, then runs next() on its result, until it gets a StopIteration.
Here is the relevant bytecode (a low level form of Python, used by the interpreter itself):
>>> import dis
>>> def f():
... for x in y:
... print(x)
...
>>> dis.dis(f)
2 0 SETUP_LOOP 24 (to 27)
3 LOAD_GLOBAL 0 (y)
6 GET_ITER
>> 7 FOR_ITER 16 (to 26)
10 STORE_FAST 0 (x)
3 13 LOAD_GLOBAL 1 (print)
16 LOAD_FAST 0 (x)
19 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
22 POP_TOP
23 JUMP_ABSOLUTE 7
>> 26 POP_BLOCK
>> 27 LOAD_CONST 0 (None)
30 RETURN_VALUE
GET_ITER calls iter(y) (which itself calls y.__iter__()) and pushes its result on the stack (think of it as a bunch of local unnamed variables), then enters the loop at FOR_ITER, which calls next(<iterator>) (which itself calls <iterator>.__next__()), then executes the code inside the loop, and the JUMP_ABSOLUTE makes the execution comes back to FOR_ITER.
Now, for the safety:
Here are the methods of a generator: https://hg.python.org/cpython/file/101404/Objects/genobject.c#l589
As you can see at line 617, the implementation of __iter__() is PyObject_SelfIter, whose implementation you can find here. PyObject_SelfIter simply returns the object (ie. the generator) itself.
So, when you nest the two loops, both iterate on the same iterator.
And, as you said, they are just calling next() on it, so it's safe.
But be cautious: the inner loop will consume items that will not be consumed by the outer loop.
Even if that is what you want to do, it may not be very readable.
If that is not what you want to do, consider itertools.tee(), which buffers the output of an iterator, allowing you to iterate over its output twice (or more). This is only efficient if the tee iterators stay close to each other in the output stream; if one tee iterator will be fully exhausted before the other is used, it's better to just call list on the iterator to materialize a list out of it.
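A minimal sketch of itertools.tee in action (the generator and values are my own):

```python
import itertools

gen = (i * i for i in range(5))
first, second = itertools.tee(gen)

# each tee iterator yields the full stream independently;
# the original gen should no longer be used directly
assert list(first) == [0, 1, 4, 9, 16]
assert list(second) == [0, 1, 4, 9, 16]
```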
No, it's not safe (as in, we won't get the outcome that we might have expected).
Consider this:
a = (_ for _ in range(20))
for num in a:
    print(num)
Of course, we will get 0 to 19 printed.
Now let's add a bit of code:
a = (_ for _ in range(20))
for num in a:
    for another_num in a:
        pass
    print(num)
The only thing that will be printed is 0.
By the time that we get to the second iteration of the outer loop, the generator will already be exhausted by the inner loop.
We can also do this:
a = (_ for _ in range(20))
for num in a:
    for another_num in a:
        print(another_num)
If it were safe, we would expect to get 0 to 19 printed 20 times, but the inner loop actually makes only a single pass (printing 1 to 19), for the same reason I mentioned above.
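The same exhaustion effect, written with assertions instead of prints (a small runnable sketch):

```python
a = (_ for _ in range(20))
outer_runs = 0
seen_inner = []

for num in a:
    outer_runs += 1
    for another_num in a:
        seen_inner.append(another_num)

assert outer_runs == 1                   # outer body ran only for 0
assert seen_inner == list(range(1, 20))  # inner loop drained the rest
```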
It's not really an answer to your question, but I would recommend not doing this because the code isn't readable. It took me a while to see that you were using a_generator twice, even though that's the entire point of your question. Don't make a future reader get confused by this. When I see a nested loop, I'm not expecting what you've done, and my brain has trouble seeing it.
I would do it like this:
def generator_with_state(y):
    state = 0
    for x in y:
        if isinstance(x, special_thing):
            state = 1
            continue
        elif state == 1 and isinstance(x, signal):
            state = 0
        yield x, state

for x, state in generator_with_state(y):
    if state == 1:
        foo(x)
    else:
        bar(x)

Stub function to return given argument [duplicate]

I'd like to point to a function that does nothing:
def identity(*args):
    return args
my use case is something like this
try:
    gettext.find(...)
    ...
    _ = gettext.gettext
else:
    _ = identity
Of course, I could use the identity defined above, but a built-in would certainly run faster (and avoid bugs introduced by my own).
Apparently, map and filter use None for the identity, but this is specific to their implementations.
>>> _ = None
>>> _("hello")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'NoneType' object is not callable
Doing some more research: there is none. A feature was requested in issue 1673203, and Raymond Hettinger said there won't be one:
Better to let people write their own trivial pass-throughs
and think about the signature and time costs.
So a better way to do it is actually (a lambda avoids naming the function):
_ = lambda *args: args
advantage: takes any number of parameters
disadvantage: the result is a boxed version of the parameters
OR
_ = lambda x: x
advantage: doesn't change the type of the parameter
disadvantage: takes exactly 1 positional parameter
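A quick sketch contrasting the two (names are mine):

```python
box = lambda *args: args
ident = lambda x: x

assert box(1) == (1,)       # single argument comes back boxed in a tuple
assert box(1, 2) == (1, 2)
assert ident(1) == 1        # unchanged, but exactly one argument allowed
```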
An identity function, as defined in https://en.wikipedia.org/wiki/Identity_function, takes a single argument and returns it unchanged:
def identity(x):
    return x
What you are asking for when you say you want the signature def identity(*args) is not strictly an identity function, as you want it to take multiple arguments. That's fine, but then you hit a problem as Python functions don't return multiple results, so you have to find a way of cramming all of those arguments into one return value.
The usual way of returning "multiple values" in Python is to return a tuple of the values - technically that's one return value but it can be used in most contexts as if it were multiple values. But doing that here means you get
>>> def mv_identity(*args):
...     return args
...
>>> mv_identity(1,2,3)
(1, 2, 3)
>>> # So far, so good. But what happens now with single arguments?
>>> mv_identity(1)
(1,)
And fixing that problem quickly gives other issues, as the various answers here have shown.
So, in summary, there's no identity function defined in Python because:
The formal definition (a single argument function) isn't that useful, and is trivial to write.
Extending the definition to multiple arguments is not well-defined in general, and you're far better off defining your own version that works the way you need it to for your particular situation.
For your precise case,
def dummy_gettext(message):
    return message
is almost certainly what you want - a function that has the same calling convention and return as gettext.gettext, which returns its argument unchanged, and is clearly named to describe what it does and where it's intended to be used. I'd be pretty shocked if performance were a crucial consideration here.
Yours will work fine. When the number of parameters is fixed, you can use an anonymous function like this:
lambda x: x
There is no built-in identity function in Python. An imitation of Haskell's id function would be:
identity = lambda x, *args: (x,) + args if args else x
Example usage:
identity(1)
1
identity(1,2)
(1, 2)
Since identity does nothing except returning the given arguments, I do not think that it is slower than a native implementation would be.
No, there isn't.
Note that your identity is equivalent to lambda *args: args and will box its args - i.e.
In [6]: id = lambda *args: args
In [7]: id(3)
Out[7]: (3,)
So, you may want to use lambda arg: arg if you want a true identity function.
NB: This example will shadow the built-in id function (which you will probably never use).
If the speed does not matter, this should handle all cases:
def identity(*args, **kwargs):
if not args:
if not kwargs:
return None
elif len(kwargs) == 1:
return next(iter(kwargs.values()))
else:
return (*kwargs.values(),)
elif not kwargs:
if len(args) == 1:
return args[0]
else:
return args
else:
return (*args, *kwargs.values())
Examples of usage:
>>> print(identity())
None
>>> identity(1)
1
>>> identity(1, 2)
(1, 2)
>>> identity(1, b=2)
(1, 2)
>>> identity(a=1, b=2)
(1, 2)
>>> identity(1, 2, c=3)
(1, 2, 3)
Stub of a single-argument function
gettext.gettext (the OP's example use case) accepts a single argument, message. If one needs a stub for it, there's no reason to return (message,) instead of message (which is what def identity(*args): return args would do). Thus both
_ = lambda message: message

def _(message):
    return message
fit perfectly.
...but a built-in would certainly run faster (and avoid bugs introduced by my own).
Bugs in such a trivial case are barely relevant. For an argument of a predefined type, say str, we can use str itself as an identity function (because of string interning it even retains object identity; see the id note below) and compare its performance with the lambda solution:
$ python3 -m timeit -s "f = lambda m: m" "f('foo')"
10000000 loops, best of 3: 0.0852 usec per loop
$ python3 -m timeit "str('foo')"
10000000 loops, best of 3: 0.107 usec per loop
A micro-optimisation is possible. For example, the following Cython code:
test.pyx
cpdef str f(str message):
    return message
Then:
$ pip install runcython3
$ makecython3 test.pyx
$ python3 -m timeit -s "from test import f" "f('foo')"
10000000 loops, best of 3: 0.0317 usec per loop
Built-in object identity function
Don't confuse an identity function with the id built-in function, which returns the 'identity' of an object (a unique identifier for that particular object, rather than the object's value as compared with the == operator) - in CPython, its memory address.
Lots of good answers and discussion in this topic. I just want to note that, in the OP's case of a single-argument identity function, it makes no difference at the bytecode level whether you use a lambda or define a function (in which case you should probably define the function, to stay PEP 8 compliant). The bytecodes are functionally identical:
import dis
function_method = compile("def identity(x):\n return x\ny=identity(Type('x', (), dict()))", "foo", "exec")
dis.dis(function_method)
1 0 LOAD_CONST 0 (<code object identity at 0x7f52cc30b030, file "foo", line 1>)
2 LOAD_CONST 1 ('identity')
4 MAKE_FUNCTION 0
6 STORE_NAME 0 (identity)
3 8 LOAD_NAME 0 (identity)
10 LOAD_NAME 1 (Type)
12 LOAD_CONST 2 ('x')
14 LOAD_CONST 3 (())
16 LOAD_NAME 2 (dict)
18 CALL_FUNCTION 0
20 CALL_FUNCTION 3
22 CALL_FUNCTION 1
24 STORE_NAME 3 (y)
26 LOAD_CONST 4 (None)
28 RETURN_VALUE
Disassembly of <code object identity at 0x7f52cc30b030, file "foo", line 1>:
2 0 LOAD_FAST 0 (x)
2 RETURN_VALUE
And lambda
import dis
lambda_method = compile("identity = lambda x: x\ny=identity(Type('x', (), dict()))", "foo", "exec")
dis.dis(lambda_method)
1 0 LOAD_CONST 0 (<code object <lambda> at 0x7f52c9fbbd20, file "foo", line 1>)
2 LOAD_CONST 1 ('<lambda>')
4 MAKE_FUNCTION 0
6 STORE_NAME 0 (identity)
2 8 LOAD_NAME 0 (identity)
10 LOAD_NAME 1 (Type)
12 LOAD_CONST 2 ('x')
14 LOAD_CONST 3 (())
16 LOAD_NAME 2 (dict)
18 CALL_FUNCTION 0
20 CALL_FUNCTION 3
22 CALL_FUNCTION 1
24 STORE_NAME 3 (y)
26 LOAD_CONST 4 (None)
28 RETURN_VALUE
Disassembly of <code object <lambda> at 0x7f52c9fbbd20, file "foo", line 1>:
1 0 LOAD_FAST 0 (x)
2 RETURN_VALUE
Adding to all answers:
Notice there is an implicit convention in the Python stdlib: a HOF whose key parameter defaults to the identity function interprets None as such.
E.g. sorted, heapq.merge, max, min, etc.
So it is not a bad idea to have your own HOF's key parameter follow the same pattern.
That is, instead of:
def my_hof(x, key=lambda _: _):
    ...
(which is totally fine)
You could write:
def my_hof(x, key=None):
    if key is None:
        key = lambda _: _
    ...
If you want.
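A runnable sketch of that convention (my_hof's body here is my own illustration, delegating to max):

```python
def my_hof(items, key=None):
    # stdlib convention: key=None means "use the identity"
    if key is None:
        key = lambda v: v
    return max(items, key=key)

assert my_hof([3, 1, 2]) == 3
assert my_hof([3, 1, 2], key=lambda v: -v) == 1
assert sorted([3, 1, 2], key=None) == [1, 2, 3]  # builtins accept None too
```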
The thread is pretty old, but I still wanted to post this.
It is possible to build an identity method for both arguments and objects. In the example below, objOut is an identity for objIn. None of the other examples above deal with **kwargs.
class test(object):
    def __init__(self, *args, **kwargs):
        self.args = args
        self.kwargs = kwargs
    def identity(self):
        return self

objIn = test('arg-1','arg-2','arg-3','arg-n', key1=1, key2=2, key3=3, keyn='n')
objOut = objIn.identity()
print('args=', objOut.args, 'kwargs=', objOut.kwargs)

# If you want just the arguments to be printed...
print(test('arg-1','arg-2','arg-3','arg-n', key1=1, key2=2, key3=3, keyn='n').identity().args)
print(test('arg-1','arg-2','arg-3','arg-n', key1=1, key2=2, key3=3, keyn='n').identity().kwargs)
$ py test.py
args= ('arg-1', 'arg-2', 'arg-3', 'arg-n') kwargs= {'key1': 1, 'keyn': 'n', 'key2': 2, 'key3': 3}
('arg-1', 'arg-2', 'arg-3', 'arg-n')
{'key1': 1, 'keyn': 'n', 'key2': 2, 'key3': 3}
