Stub function to return given argument [duplicate]

I'd like to point to a function that does nothing:
def identity(*args):
    return args
My use case is something like this:
try:
    gettext.find(...)
    ...
    _ = gettext.gettext
except Exception:
    _ = identity
Of course, I could use the identity defined above, but a built-in would certainly run faster (and avoid bugs of my own making).
Apparently, map and filter use None for the identity, but this is specific to their implementations.
>>> _ = None
>>> _("hello")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'NoneType' object is not callable

Doing some more research: there is none. The feature was requested in issue 1673203, and Raymond Hettinger said there won't be one:
Better to let people write their own trivial pass-throughs
and think about the signature and time costs.
So a better way to do it is actually (a lambda avoids naming the function):
_ = lambda *args: args
advantage: takes any number of parameters
disadvantage: the result is a tuple that wraps the parameters
OR
_ = lambda x: x
advantage: doesn't change the type of the parameter
disadvantage: takes exactly 1 positional parameter
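A quick REPL check of the two trade-offs above (multi and single are just throwaway names for the two lambdas):
>>> multi = lambda *args: args
>>> single = lambda x: x
>>> multi("hello")   # the argument comes back wrapped in a tuple
('hello',)
>>> single("hello")  # the argument comes back unchanged
'hello'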

An identity function, as defined in https://en.wikipedia.org/wiki/Identity_function, takes a single argument and returns it unchanged:
def identity(x):
    return x
What you are asking for when you say you want the signature def identity(*args) is not strictly an identity function, as you want it to take multiple arguments. That's fine, but then you hit a problem as Python functions don't return multiple results, so you have to find a way of cramming all of those arguments into one return value.
The usual way of returning "multiple values" in Python is to return a tuple of the values - technically that's one return value but it can be used in most contexts as if it were multiple values. But doing that here means you get
>>> def mv_identity(*args):
...     return args
...
>>> mv_identity(1,2,3)
(1, 2, 3)
>>> # So far, so good. But what happens now with single arguments?
>>> mv_identity(1)
(1,)
And fixing that problem quickly gives other issues, as the various answers here have shown.
So, in summary, there's no identity function defined in Python because:
The formal definition (a single argument function) isn't that useful, and is trivial to write.
Extending the definition to multiple arguments is not well-defined in general, and you're far better off defining your own version that works the way you need it to for your particular situation.
For your precise case,
def dummy_gettext(message):
    return message
is almost certainly what you want: a function that has the same calling convention and return value as gettext.gettext, returns its argument unchanged, and is clearly named to describe what it does and where it's intended to be used. I'd be pretty shocked if performance were a crucial consideration here.
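For completeness, here is one way the stub might be wired into the OP's scenario. This is just a sketch; the domain name 'myapp' and the localedir are placeholders:
import gettext

def dummy_gettext(message):
    return message

# Use the real translation function only if a message catalog can be found.
if gettext.find('myapp', localedir='locale'):  # placeholder domain and directory
    _ = gettext.gettext
else:
    _ = dummy_gettext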

Yours will work fine. When the number of parameters is fixed, you can use an anonymous function like this:
lambda x: x

There is no built-in identity function in Python. An imitation of Haskell's id function would be:
identity = lambda x, *args: (x,) + args if args else x
Example usage:
>>> identity(1)
1
>>> identity(1, 2)
(1, 2)
Since identity does nothing except return the given arguments, I do not think it would be noticeably slower than a native implementation.

No, there isn't.
Note that your identity:
is equivalent to lambda *args: args
will box its args, i.e.
In [6]: id = lambda *args: args
In [7]: id(3)
Out[7]: (3,)
So, you may want to use lambda arg: arg if you want a true identity function.
NB: This example will shadow the built-in id function (which you will probably never use).

If the speed does not matter, this should handle all cases:
def identity(*args, **kwargs):
    if not args:
        if not kwargs:
            return None
        elif len(kwargs) == 1:
            return next(iter(kwargs.values()))
        else:
            return (*kwargs.values(),)
    elif not kwargs:
        if len(args) == 1:
            return args[0]
        else:
            return args
    else:
        return (*args, *kwargs.values())
Examples of usage:
>>> print(identity())
None
>>> identity(1)
1
>>> identity(1, 2)
(1, 2)
>>> identity(1, b=2)
(1, 2)
>>> identity(a=1, b=2)
(1, 2)
>>> identity(1, 2, c=3)
(1, 2, 3)

Stub of a single-argument function
gettext.gettext (the OP's example use case) accepts a single argument, message. If one needs a stub for it, there's no reason to return (message,) instead of message (which is what def identity(*args): return args does). Thus both
_ = lambda message: message
def _(message):
    return message
fit perfectly.
...but a built-in would certainly run faster (and avoid bugs introduced by my own).
Bugs in such a trivial case are barely relevant. For an argument of a predefined type, say str, we can use str() itself as an identity function (because of string interning it even retains object identity; see the id note below) and compare its performance with the lambda solution:
$ python3 -m timeit -s "f = lambda m: m" "f('foo')"
10000000 loops, best of 3: 0.0852 usec per loop
$ python3 -m timeit "str('foo')"
10000000 loops, best of 3: 0.107 usec per loop
A micro-optimisation is possible. For example, the following Cython code:
test.pyx:
cpdef str f(str message):
    return message
Then:
$ pip install runcython3
$ makecython3 test.pyx
$ python3 -m timeit -s "from test import f" "f('foo')"
10000000 loops, best of 3: 0.0317 usec per loop
Built-in object identity function
Don't confuse an identity function with the id built-in function, which returns the 'identity' of an object, meaning a unique identifier for that particular object (rather than that object's value, as compared with the == operator); in CPython it is the object's memory address.
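A quick illustration of the difference, using the single-argument lambda from above:
>>> identity = lambda x: x
>>> s = 'foo'
>>> identity(s) is s  # the identity function returns the very same object
True
>>> type(id(s))       # id() instead returns an integer identifier
<class 'int'>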

Lots of good answers and discussion are in this topic. I just want to note that, in the OP's case where the identity function takes a single argument, compile-wise it doesn't matter whether you use a lambda or define a function (in which case you should probably define the function, to stay PEP 8 compliant). The bytecodes are functionally identical:
import dis
function_method = compile("def identity(x):\n return x\ny=identity(Type('x', (), dict()))", "foo", "exec")
dis.dis(function_method)
1 0 LOAD_CONST 0 (<code object identity at 0x7f52cc30b030, file "foo", line 1>)
2 LOAD_CONST 1 ('identity')
4 MAKE_FUNCTION 0
6 STORE_NAME 0 (identity)
3 8 LOAD_NAME 0 (identity)
10 LOAD_NAME 1 (Type)
12 LOAD_CONST 2 ('x')
14 LOAD_CONST 3 (())
16 LOAD_NAME 2 (dict)
18 CALL_FUNCTION 0
20 CALL_FUNCTION 3
22 CALL_FUNCTION 1
24 STORE_NAME 3 (y)
26 LOAD_CONST 4 (None)
28 RETURN_VALUE
Disassembly of <code object identity at 0x7f52cc30b030, file "foo", line 1>:
2 0 LOAD_FAST 0 (x)
2 RETURN_VALUE
And lambda
import dis
lambda_method = compile("identity = lambda x: x\ny=identity(Type('x', (), dict()))", "foo", "exec")
dis.dis(lambda_method)
1 0 LOAD_CONST 0 (<code object <lambda> at 0x7f52c9fbbd20, file "foo", line 1>)
2 LOAD_CONST 1 ('<lambda>')
4 MAKE_FUNCTION 0
6 STORE_NAME 0 (identity)
2 8 LOAD_NAME 0 (identity)
10 LOAD_NAME 1 (Type)
12 LOAD_CONST 2 ('x')
14 LOAD_CONST 3 (())
16 LOAD_NAME 2 (dict)
18 CALL_FUNCTION 0
20 CALL_FUNCTION 3
22 CALL_FUNCTION 1
24 STORE_NAME 3 (y)
26 LOAD_CONST 4 (None)
28 RETURN_VALUE
Disassembly of <code object <lambda> at 0x7f52c9fbbd20, file "foo", line 1>:
1 0 LOAD_FAST 0 (x)
2 RETURN_VALUE
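If you'd rather not read disassembly, you can confirm the equivalence by comparing the raw bytecode of the two bodies directly (a quick sketch):
f = lambda x: x

def g(x):
    return x

# The compiled bytecode of the two bodies is byte-for-byte identical.
print(f.__code__.co_code == g.__code__.co_code)  # True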

Adding to all answers:
Notice there is an implicit convention in the Python stdlib, where a HOF that defaults its key parameter to the identity function interprets None as such.
E.g. sorted, heapq.merge, max, min, etc.
So it is not a bad idea to have your own HOFs that expect a key follow the same pattern.
That is, instead of:
def my_hof(x, key=lambda _: _):
    ...
(which is perfectly valid)
You could write:
def my_hof(x, key=None):
    if key is None:
        key = lambda _: _
    ...
If you want.
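A small sketch of the convention in action (my_hof here is a made-up example, not a stdlib function):
def my_hof(xs, key=None):
    # Treat None as "use the identity function", just like sorted() and min() do.
    if key is None:
        key = lambda v: v
    return [key(x) for x in xs]

print(my_hof([3, 1, 2]))                    # [3, 1, 2]
print(my_hof([3, 1, 2], key=lambda v: -v))  # [-3, -1, -2]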

The thread is pretty old, but I still wanted to post this.
It is possible to build an identity method for both arguments and objects. In the example below, objOut is an identity for objIn. Most of the other examples above don't deal with **kwargs.
class test(object):
    def __init__(self, *args, **kwargs):
        self.args = args
        self.kwargs = kwargs

    def identity(self):
        return self

objIn = test('arg-1', 'arg-2', 'arg-3', 'arg-n', key1=1, key2=2, key3=3, keyn='n')
objOut = objIn.identity()
print('args=', objOut.args, 'kwargs=', objOut.kwargs)

# If you want just the arguments to be printed...
print(test('arg-1', 'arg-2', 'arg-3', 'arg-n', key1=1, key2=2, key3=3, keyn='n').identity().args)
print(test('arg-1', 'arg-2', 'arg-3', 'arg-n', key1=1, key2=2, key3=3, keyn='n').identity().kwargs)
$ py test.py
args= ('arg-1', 'arg-2', 'arg-3', 'arg-n') kwargs= {'key1': 1, 'keyn': 'n', 'key2': 2, 'key3': 3}
('arg-1', 'arg-2', 'arg-3', 'arg-n')
{'key1': 1, 'keyn': 'n', 'key2': 2, 'key3': 3}

Related

How does the python del function work without calling __getitem__

In Python, an object is subscriptable when its class defines a
__getitem__(self, k)
method, allowing us to "get items" through the bracket syntax:
obj[k]
The syntax for the builtin del function is:
del(obj[k])
This deletes item k from the (subscriptable) object obj, but apparently without calling the special getitem method.
See example below
>>> class A:
...     def __init__(self, d):
...         self._d = d
...
...     def __getitem__(self, k):
...         print("Calling __getitem__")
...         return self._d[k]
...
...     def __delitem__(self, k):
...         del(self._d[k])
...
>>>
>>> a = A({1: 'one', 2: 'two', 3: 'three'})
>>> a._d
{1: 'one', 2: 'two', 3: 'three'}
>>> a[2]
Calling __getitem__
'two'
>>> del(a[2])
>>> # Note there was no "Calling __getitem__" print!
So it seems that, before a[2] forwards the work to the getitem method, the interpreter is aware of the del context and bypasses it, directly calling
a.__delitem__(2)
instead.
How does that work?
And most of all: Is this mechanism customizable?
Could I, for example, write a function foo so that
foo(obj[k])
doesn't ever call
obj.__getitem__(k)
but instead, for example,
obj.foo(k)
del is not a function. It can do this because it's not a function. This is why it's not a function. It's a keyword built into the language as part of the del statement.
To keep in mind that things like del and return aren't functions (and avoid unexpected precedence surprises), it's best to not put parentheses around the "argument":
del whatever
rather than
del(whatever)
del does not take an object and delete it. The thing to the right of del is not an expression to be evaluated. It is a target_list, the same kind of syntax that appears on the left side of the = in an assignment statement:
target_list ::= target ("," target)* [","]
target ::= identifier
| "(" [target_list] ")"
| "[" [target_list] "]"
| attributeref
| subscription
| slicing
| "*" target
To delete a subscription target like obj[k], Python evaluates the expression obj and the expression k to produce two objects, then calls the __delitem__ method of the first object with the second object as the argument. obj[k] is never evaluated as an expression, though pieces of it are.
This all relies on compiler and grammar support, and cannot be done for arbitrary user-defined functions.
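A rough way to see that equivalence in the REPL (glossing over the detail that special methods are looked up on the type, not the instance):
>>> d = {1: 'one', 2: 'two'}
>>> d.__delitem__(1)  # roughly what del d[1] compiles down to
>>> d
{2: 'two'}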
You are writing your example code as though del were a function taking
arguments and assuming that the argument a[2] has to be processed first via
__getitem__(). But strictly speaking, del is a statement. That means the language parser can treat it in a special
way – in other words, not necessarily like a function call.
We can use the dis module to get some hints about that. Note how the del
operation gets expressed directly as the very specific DELETE_SUBSCR operation.
It bypasses the BINARY_SUBSCR step used by the len example.
from dis import dis

def f(xs):
    del xs[2]

def g(xs):
    len(xs[2])

print('\n# del')
dis(f)

print('\n# len')
dis(g)
Output (summarized):
# del
0 LOAD_FAST 0 (xs)
2 LOAD_CONST 1 (2)
4 DELETE_SUBSCR
6 LOAD_CONST 0 (None)
8 RETURN_VALUE
# len
0 LOAD_GLOBAL 0 (len)
2 LOAD_FAST 0 (xs)
4 LOAD_CONST 1 (2)
6 BINARY_SUBSCR
8 CALL_FUNCTION 1
10 POP_TOP
12 LOAD_CONST 0 (None)
14 RETURN_VALUE
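This also answers the supplementary question: no, the mechanism is not available to user-defined functions. By the time an ordinary function like foo is called, its argument has already been fully evaluated, as this small sketch (with a made-up Loud class) shows:
class Loud:
    def __getitem__(self, k):
        print("Calling __getitem__")
        return k

def foo(value):
    print("foo got", value)

# The subscription is evaluated before foo ever runs, so __getitem__
# is always called; foo only ever sees the resulting value.
foo(Loud()[2])
# Calling __getitem__
# foo got 2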

list() vs iterable unpacking in Python 3.5+

Is there any practical difference between list(iterable) and [*iterable] in versions of Python that support the latter?
list(x) is a function call; [*x] is an expression. You can reassign list and make it do something else (but you shouldn't).
Talking about CPython, b = list(a) translates to this sequence of bytecodes:
LOAD_NAME 1 (list)
LOAD_NAME 0 (a)
CALL_FUNCTION 1
STORE_NAME 2 (b)
Instead, c = [*a] becomes:
LOAD_NAME 0 (a)
BUILD_LIST_UNPACK 1
STORE_NAME 3 (c)
so you can argue that [*a] might be slightly more efficient, but marginally so.
You can use the standard library module dis to investigate the byte code generated by a function. In this case:
import dis

def call_list(x):
    return list(x)

def unpacking(x):
    return [*x]
dis.dis(call_list)
# 2 0 LOAD_GLOBAL 0 (list)
# 2 LOAD_FAST 0 (x)
# 4 CALL_FUNCTION 1
# 6 RETURN_VALUE
dis.dis(unpacking)
# 2 0 LOAD_FAST 0 (x)
# 2 BUILD_LIST_UNPACK 1
# 4 RETURN_VALUE
So there is a difference and it is not only the loading of the globally defined name list, which does not need to happen with the unpacking. So it boils down to how the built-in list function is defined and what exactly BUILD_LIST_UNPACK does.
Note that both are actually a lot less code than writing a standard list comprehension for this:
def list_comp(x):
    return [a for a in x]
dis.dis(list_comp)
# 2 0 LOAD_CONST 1 (<code object <listcomp> at 0x7f65356198a0, file "<ipython-input-46-dd71fb182ec7>", line 2>)
# 2 LOAD_CONST 2 ('list_comp.<locals>.<listcomp>')
# 4 MAKE_FUNCTION 0
# 6 LOAD_FAST 0 (x)
# 8 GET_ITER
# 10 CALL_FUNCTION 1
# 12 RETURN_VALUE
Since [*iterable] is unpacking, it accepts assignment-like syntax, unlike list(iterable):
>>> [*[]] = []
>>> list([]) = []
File "<stdin>", line 1
SyntaxError: can't assign to function call
You can read more about this here (not useful though).
You can also use list(sequence=iterable), i.e. with a keyword argument (on the Python versions current when this was written; the parameter became positional-only in later releases):
>>> list(sequence=[])
[]
Again not useful.
There's always going to be some differences between two constructs that do the same thing. Thing is, I wouldn't say the differences in this case are actually practical. Both are expressions that take the iterable, iterate through it and then create a list out of it.
The contract is the same: the input is an iterable, the output is a list populated by the iterable's elements.
Yes, list can be rebound to a different name; list(it) is a function call while [*it] is a list display; [*it] is faster with smaller iterables but generally performs about the same with larger ones. Heck, one could even throw in the fact that [*it] is three fewer keystrokes.
Are these practical though? Would I think of them when trying to get a list out of an iterable? Well, maybe the keystrokes, in order to stay under 79 characters and get the linter to shut up.
Apparently there’s a performance difference in CPython, where [*a] overallocates and list() doesn’t: What causes [*a] to overallocate?
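If you want to measure the difference on your own interpreter, timeit makes the comparison straightforward; results vary by Python version and machine, so no numbers are quoted here:
$ python3 -m timeit -s "it = list(range(10))" "list(it)"
$ python3 -m timeit -s "it = list(range(10))" "[*it]"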

In-place custom object unpacking different behavior with __getitem__ python 3.5 vs python 3.6

A follow-up question on this question: I ran the code below on Python 3.5 and Python 3.6, with very different results:
class Container:
    KEYS = ('a', 'b', 'c')

    def __init__(self, a=None, b=None, c=None):
        self.a = a
        self.b = b
        self.c = c

    def keys(self):
        return Container.KEYS

    def __getitem__(self, key):
        if key not in Container.KEYS:
            raise KeyError(key)
        return getattr(self, key)

    def __str__(self):
        # Python 3.6
        # return f'{self.__class__.__name__}(a={self.a}, b={self.b}, c={self.c})'
        # Python 3.5
        return ('{self.__class__.__name__}(a={self.a}, b={self.b}, '
                'c={self.c})').format(self=self)

data0 = Container(a=1, b=2, c=3)
print(data0)

data3 = Container(**data0, b=7)
print(data3)
As stated in the previous question, this raises
TypeError: type object got multiple values for keyword argument 'b'
on Python 3.6. But on Python 3.5 I get the exception:
KeyError: 0
Moreover, if I do not raise KeyError but just print out the key and return None in __getitem__:
def __getitem__(self, key):
    if key not in Container.KEYS:
        # raise KeyError(key)
        print(key)
        return
    return getattr(self, key)
this will print out the integer sequence 0, 1, 2, 3, 4, ... (Python 3.5).
So my questions are:
What has changed between the releases that makes this behave so differently?
Where are these integers coming from?
UPDATE: As mentioned in the comment by λuser, implementing __iter__ will change the behavior on Python 3.5 to match what Python 3.6 does:
def __iter__(self):
    return iter(Container.KEYS)
This is actually a complicated conflict between multiple internal operations during the unpacking of a custom mapping object and the creation of the caller's arguments. Therefore, if you want to understand the underlying reasons thoroughly, I'd suggest looking into the source code. However, here are some hints and starting points that you can look into for greater detail.
Internally, when you unpack at the caller level, the byte code BUILD_MAP_UNPACK_WITH_CALL(count) pops count mappings from the stack, merges them into a single dictionary and pushes the result. On the other hand, the stack effect of this opcode with argument oparg is defined as follows:
case BUILD_MAP_UNPACK_WITH_CALL:
    return 1 - oparg;
With that being said, let's look at the byte code of an example (in Python 3.5) to see this in action:
>>> def bar(data0):foo(**data0, b=4)
...
>>>
>>> dis.dis(bar)
1 0 LOAD_GLOBAL 0 (foo)
3 LOAD_FAST 0 (data0)
6 LOAD_CONST 1 ('b')
9 LOAD_CONST 2 (4)
12 BUILD_MAP 1
15 BUILD_MAP_UNPACK_WITH_CALL 258
18 CALL_FUNCTION_KW 0 (0 positional, 0 keyword pair)
21 POP_TOP
22 LOAD_CONST 0 (None)
25 RETURN_VALUE
>>>
As you can see, at offset 15 we have BUILD_MAP_UNPACK_WITH_CALL byte code which is responsible for the unpacking.
Now, why does it pass 0 as the key argument to the __getitem__ method?
Whenever the interpreter encounters an exception during unpacking, which in this case is a KeyError, it stops the push/pop flow, and instead of the real value of your variable it returns the stack effect, which is why the key is 0 at first, and why, if you don't raise the exception, you get an incremented result each time (due to the stack size).
Now, if you do the same disassembly in Python 3.6 you'll get the following result:
>>> dis.dis(bar)
1 0 LOAD_GLOBAL 0 (foo)
2 BUILD_TUPLE 0
4 LOAD_FAST 0 (data0)
6 LOAD_CONST 1 ('b')
8 LOAD_CONST 2 (4)
10 BUILD_MAP 1
12 BUILD_MAP_UNPACK_WITH_CALL 2
14 CALL_FUNCTION_EX 1
16 POP_TOP
18 LOAD_CONST 0 (None)
20 RETURN_VALUE
Before creating the local variables (LOAD_FAST) and after LOAD_GLOBAL, there is a BUILD_TUPLE, which is responsible for creating a tuple by consuming count items from the stack:
BUILD_TUPLE(count)
Creates a tuple consuming count items from the stack, and pushes the resulting tuple onto the stack.
And this is, IMO, why you don't get a KeyError and instead get a TypeError: during the creation of the tuple of arguments, it encounters a duplicate name and therefore, properly, raises the TypeError.

Do Python nested functions copy-on-could-write?

Please forgive a Python enthusiast a mostly academic question.
I was interested in the cost, if any, of nested functions - not the functionally justified ones that utilize closures etc., but the keep-the-outer-namespace-tidy variety.
So I did a simple measurement:
def inner(x):
    return x*x

def flat(x):
    return inner(x)

def nested(x):
    def inner(x):
        return x*x
    return inner(x)

# just to get a feel of the cost of having two more lines
def fake_nested(x):
    y = x
    z = x
    return inner(x)

from timeit import timeit

print(timeit('f(3)', globals=dict(f=flat)))
print(timeit('f(3)', globals=dict(f=nested)))
print(timeit('f(3)', globals=dict(f=fake_nested)))

# 0.17055258399341255
# 0.23098028398817405
# 0.19381927204085514
So it seems that there is some overhead and it appears to be more than would be explained by having two more lines.
It seems, however, that the inner def statement is not evaluated each time the outer function is called, indeed the inner function object appears to be cached:
def nested(x):
    def inner(x):
        return x*x
    print(id(inner), id(inner.__code__), id(inner.__closure__))
    return inner(x)

nested(3)
x = [list(range(i)) for i in range(5000)]  # create some memory pressure
nested(3)

# 139876371445960 139876372477824 8845216
# 139876371445960 139876372477824 8845216
Looking for other things that might add to the longer runtime I stumbled over the following nerdgasm:
def nested(x):
    def inner(x):
        return x*x
    print(id(inner), id(inner.__code__), id(inner.__closure__))
    return inner

nested(3)
x = [list(range(i)) for i in range(5000)]  # create some memory pressure
a = nested(3)
x = [list(range(i)) for i in range(5000)]  # create some memory pressure
nested(3)

# 139906265032768 139906264446704 8845216
# 139906265032768 139906264446704 8845216
# 139906264258624 139906264446704 8845216
It seems that if Python detects that there is an outer reference to the cached nested function, then it creates a new function object.
Now - assuming my reasoning so far is not completely off - my question: What is this good for?
My first idea was "Ok, if the user has a reference to the cached function, they may have messed with it, so better make a clean new one." But on second thought that doesn't seem to wash, because the copy is not a deep copy, and also: what if the user messes with the function and then throws the reference away?
Supplementary question: Does Python do any other fiendishly clever things behind the scenes? And is this at all related to the slower execution of nested compared to flat?
Your reasoning is completely off. Python always creates a new function object each time a def is encountered in the normal program flow - no exceptions.
It is just that in CPython the id of the newly created function will likely be the same as that of the old one. See "Why does id({}) == id({}) and id([]) == id([]) in CPython?".
Now, if you saved a reference to the inner function, it is not deleted before the next function is created, and naturally the new function cannot coexist at the same memory address.
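A small sketch of that effect; note that address reuse is a CPython implementation detail, so the first comparison is only likely, not guaranteed, to be True:
>>> def make():
...     def inner():
...         pass
...     return inner
...
>>> id(make()) == id(make())  # first object is freed; its address is often reused
True
>>> f = make()                # keep the first object alive...
>>> id(f) == id(make())       # ...so the new function needs a different address
False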
As for the time difference, a look at the bytecode of the two functions provides some hints. A comparison between nested() and fake_nested() shows that whereas fake_nested() just loads the already defined global function inner(), nested() has to create this function. There will be some overhead here, whereas the other operations will be relatively fast.
>>> import dis
>>> dis.dis(flat)
2 0 LOAD_GLOBAL 0 (inner)
3 LOAD_FAST 0 (x)
6 CALL_FUNCTION 1
9 RETURN_VALUE
>>> dis.dis(nested)
2 0 LOAD_CONST 1 (<code object inner at 0x7f2958a33830, file "<stdin>", line 2>)
3 MAKE_FUNCTION 0
6 STORE_FAST 1 (inner)
4 9 LOAD_FAST 1 (inner)
12 LOAD_FAST 0 (x)
15 CALL_FUNCTION 1
18 RETURN_VALUE
>>> dis.dis(fake_nested)
2 0 LOAD_FAST 0 (x)
3 STORE_FAST 1 (y)
3 6 LOAD_FAST 0 (x)
9 STORE_FAST 2 (z)
4 12 LOAD_GLOBAL 0 (inner)
15 LOAD_FAST 0 (x)
18 CALL_FUNCTION 1
21 RETURN_VALUE
As for the inner function caching part, the other answer already clarifies that a new inner() function will be created every time nested() is run. To see this more clearly, consider the following variation on nested(), cond_nested(), which creates the same function under two different names based on a flag. The first time this runs with a False flag, the second function inner2() is created. Next, when I change the flag to True, the first function inner1() is created and the memory occupied by the second function inner2() is freed. So if I run again with a True flag, the first function is again created and is assigned the memory that was occupied by the second function, which is free now.
>>> def cond_nested(x, flag=False):
...     if flag:
...         def inner1(x):
...             return x*x
...         cond_nested.func = inner1
...         print(id(inner1))
...         return inner1(x)
...     else:
...         def inner2(x):
...             return x*x
...         cond_nested.func = inner2
...         print(id(inner2))
...         return inner2(x)
...
...
>>> cond_nested(2)
139815557561112
4
>>> cond_nested.func
<function inner2 at 0x7f2958a47b18>
>>> cond_nested(2, flag=True)
139815557561352
4
>>> cond_nested.func
<function inner1 at 0x7f2958a47c08>
>>> cond_nested(3, flag=True)
139815557561112
9
>>> cond_nested.func
<function inner1 at 0x7f2958a47b18>
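To see directly that each call executes the def statement and creates a distinct function object, while the compiled code object is shared (it is compiled once and stored as a constant of nested()), here is a quick sketch:
def nested(x):
    def inner(x):
        return x*x
    return inner

a = nested(3)
b = nested(3)
print(a is b)                    # False: two distinct function objects
print(a.__code__ is b.__code__)  # True: both wrap the same code object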

Python order in which functions in print statement are called?

Let's say I have
def foo(n):
    print("foo", n)

def bar(n):
    print("bar", n)

print("Hello", foo(1), bar(1))
I would expect the output to be:
Hello
foo 1 None
bar 1 None
But instead I get something which surprised me:
foo 1
bar 1
Hello None None
Why does Python call the functions first, before printing the "Hello"? It seems like it would make more sense to print "Hello", then call foo(1), have it print its output, and then print "None" as its return value. Then call bar(1), print that output, and print "None" as its return value. Is there a reason Python (or maybe other languages) calls the functions in this way instead of executing each argument in the order it appears?
Edit: My follow-up question is: what is happening internally, with Python somehow temporarily storing the return values of each argument as it evaluates the expressions left to right? For example, I now understand it will evaluate each expression left to right, but the final line says Hello None None, so is Python somehow remembering from the execution of each function that the second and third arguments have a return value of None? For example, when evaluating foo(), it will print foo 1 and then hit no return statement, so is it storing in memory that foo didn't return a value?
Quoting from the documentation:
Python evaluates expressions from left to right. Notice that while evaluating an assignment, the right-hand side is evaluated before the left-hand side.
Bold emphasis mine. So, all expressions are first evaluated and then passed to print.
Observe the byte code for the print call:
1 0 LOAD_NAME 0 (print)
3 LOAD_CONST 0 ('Hello')
6 LOAD_NAME 1 (foo)
9 LOAD_CONST 1 (1)
12 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
15 LOAD_NAME 2 (bar)
18 LOAD_CONST 1 (1)
21 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
24 CALL_FUNCTION 3 (3 positional, 0 keyword pair)
27 RETURN_VALUE
foo (LINE 12) and bar (LINE 21) are first called, followed by print (LINE 24 - 3 positional args).
As to the question of where these intermediate computed values are stored: on the call stack. print accesses the return values simply by popping them off of the stack. - Christian Dean
As is specified in the documentation:
Python evaluates expressions from left to right. Notice that while evaluating an assignment, the right-hand side is evaluated before the left-hand side.
This thus means that if you write:
print("Hello",foo(1),bar(1))
It is equivalent to:
arg1 = "Hello"
arg2 = foo(1)
arg3 = bar(1)
print(arg1,arg2,arg3)
So the arguments are evaluated before the function call.
This also happens when we for instance have a tree:
def foo(*x):
    print(x)
    return x
print(foo(foo('a'),foo('b')),foo(foo('c'),foo('d')))
This prints as:
>>> print(foo(foo('a'),foo('b')),foo(foo('c'),foo('d')))
('a',)
('b',)
(('a',), ('b',))
('c',)
('d',)
(('c',), ('d',))
(('a',), ('b',)) (('c',), ('d',))
Since Python evaluates arguments left to right, it will first evaluate foo(foo('a'),foo('b')); but in order to evaluate foo(foo('a'),foo('b')), it first needs to evaluate foo('a'), followed by foo('b'). Then it can call foo(foo('a'),foo('b')) with the results of the previous calls.
Then it wants to evaluate the second argument, foo(foo('c'),foo('d')). But in order to do this, it first evaluates foo('c') and foo('d'). Next it can evaluate foo(foo('c'),foo('d')), and then finally it can evaluate the full expression: print(foo(foo('a'),foo('b')),foo(foo('c'),foo('d'))).
So the evaluation is equivalent to:
arg11 = foo('a')
arg12 = foo('b')
arg1 = foo(arg11, arg12)
arg21 = foo('c')
arg22 = foo('d')
arg2 = foo(arg21, arg22)
print(arg1, arg2)
The enclosing function is not called until all of its arguments have been evaluated. This is consistent with the basic rules of mathematics that state that operations within parentheses are performed before those outside. As such print() will always happen after both foo() and bar().
The answer is simple: in Python, the arguments of a function like print are always evaluated first, left to right.
Take a look at this Stack Overflow question: In which order is an if statement evaluated in Python
And None is just the return value of the function. It executes the function first and then prints its return value.
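A quick way to see that a function without a return statement yields None, which is exactly the value print receives for that argument:
>>> def foo(n):
...     print("foo", n)
...
>>> result = foo(1)
foo 1
>>> print(result)
None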
