Concise way to unpack twice from a returned tuple - Python

This is probably a dumb question, but I couldn't find anything from searches.
I know that if I want to unpack a returned tuple into multiple variables, the syntax is A, B = myfunction()
What if my function returns a tuple of lists, such as ([1], [2]), and I want to assign the integers 1 and 2 to my variables instead of the lists [1] and [2]? Is there a shorthand one-liner for this, or do I need to do
A, B = myfunction()
A = A[0]
B = B[0]
Thanks in advance for your help.

Use the same structure:
[A], [B] = myfunction()
Demo:
>>> def myfunction():
...     return ([1], [2])
>>> [A], [B] = myfunction()
>>> A
1
>>> B
2
Regarding DeepSpace's comment that "this creates 2 new lists": It doesn't:
>>> import dis
>>> dis.dis('[A], [B] = myfunction()')
  1           0 LOAD_NAME                0 (myfunction)
              2 CALL_FUNCTION            0
              4 UNPACK_SEQUENCE          2
              6 UNPACK_SEQUENCE          1
              8 STORE_NAME               1 (A)
             10 UNPACK_SEQUENCE          1
             12 STORE_NAME               2 (B)
             14 LOAD_CONST               0 (None)
             16 RETURN_VALUE
This is what creating a list looks like:
>>> dis.dis('[A]')
  1           0 LOAD_NAME                0 (A)
              2 BUILD_LIST               1
              4 RETURN_VALUE
Context matters.
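As a side note, the same nesting idea can be spelled with one-element tuple targets instead of list targets; this is an equivalent sketch, not something from the original answer:
>>> (A,), (B,) = myfunction()
>>> A, B
(1, 2)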

Use arguments so you can control the indexing from outside.
def myfunction(a_index=None, b_index=None):
    ...
    if a_index is None and b_index is None:
        return a, b
    return a[a_index], b[b_index]
Then
A, B = myfunction(0, 0)
This is a reusable solution. If you ever need other indexes:
A, B = myfunction(2, 1)
This does not support mixing (i.e., passing only a_index or only b_index), but if you need that behavior you can easily add it.
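For completeness, here is a minimal sketch with mixing supported; the bodies that compute a and b are placeholders, not part of the original answer:
def myfunction(a_index=None, b_index=None):
    a = [1]  # placeholder for however a is actually built
    b = [2]  # placeholder for however b is actually built
    a = a if a_index is None else a[a_index]
    b = b if b_index is None else b[b_index]
    return a, b

A, B = myfunction(0, 0)       # A == 1, B == 2
C, D = myfunction(b_index=0)  # C == [1], D == 2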

One way of achieving this result is to flatten your data. I would do it the following way:
def myfunction():
    return ([1], [2])

A, B = (i[0] for i in myfunction())
print(A) # 1
print(B) # 2
Other flattening methods exist in Python, but a generator expression is entirely sufficient for the two-level structure in your case (a tuple of lists).
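Another standard-library option for flattening a shallow structure like this is itertools.chain; a small sketch, not part of the original answer:
from itertools import chain

def myfunction():
    return ([1], [2])

A, B = chain.from_iterable(myfunction())
print(A)  # 1
print(B)  # 2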

Related

How to understand this assignment statement in Python?

I tried the snippet below:
a, b = a[b] = {}, 5
print('a={0},b={1}'.format(a,b))
The IDE spits out the following:
a={5: ({...}, 5)},b=5
I have tried S3DEV's advice and executed:
from dis import dis
dis('a, b = a[b] = {}, 5')
And it gives me the following:
  1           0 BUILD_MAP                0
              2 LOAD_CONST               0 (5)
              4 BUILD_TUPLE              2
              6 DUP_TOP
              8 UNPACK_SEQUENCE          2
             10 STORE_NAME               0 (a)
             12 STORE_NAME               1 (b)
             14 LOAD_NAME                0 (a)
             16 LOAD_NAME                1 (b)
             18 STORE_SUBSCR
             20 LOAD_CONST               1 (None)
             22 RETURN_VALUE
But I still cannot understand why a[b] = a, 5 happens at step 18 STORE_SUBSCR. Any further explanation?
This is an assignment statement with multiple target lists; for this case the docs say that the statement "assigns the single resulting object to each of the target lists, from left to right." Within each target list, assignments also proceed left to right.
Thus, the statement is equivalent to
a = {}
b = 5
a[b] = a, 5
The reason that the last assignment is a[b]=a,5 and not a,b={},5 is that the value ({}, 5) is only evaluated once, so it's the same dict that gets used throughout. First, a is set to refer to that dict, then the dict — through a — is modified to refer to itself.
EDIT: Perhaps it is clearer to say that the statement is equivalent to
temp1 = {}
temp2 = 5
a = temp1
b = temp2
a[b] = temp1, temp2
Right before the last step, a and temp1 refer to the same object, which thus becomes self-referring after the last step.
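A quick interactive check (a sketch; the dict repr matches the one in the question) confirms the self-reference:
>>> a, b = a[b] = {}, 5
>>> a
{5: ({...}, 5)}
>>> a[5][0] is a
True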
This is not code I want to see in production. :)

When reassigning a Python reference to itself, does it un-assign and re-assign, or do nothing?

def new_val(x):
    x['a'] = 5
    return x
b = {'a': 2}
b = new_val(b) # b re-assigned to ret val
Since dictionaries are mutable, b is a reference pointer to a dictionary, and we pass this pointer into the function new_val.
The reference to the dictionary doesn't change, but the dictionary's reference to 2 changes to 5.
The original variable b should now have 'a' map to 5. However, I'm wondering whether the reference to the dictionary (in other words, the pointer for variable b) ever changes.
Technically, we 're-assign' the variable b to a reference that happens to be the same.
At a low level, what happens? Is this like a no-op, where some logic recognizes that the reference is the same, or does the reference actually get unassigned and reassigned?
Maybe a simpler example would be:
b = {}
b = b # At a low level, what does this line do?
b = b is not a no-op. The value referenced by b is loaded and then stored back under the same name b. It accomplishes nothing useful, but it is not skipped.
Don't take my word for it. Let's disassemble your last example instead:
def f():
    b = {}
    b = b
import dis
print(dis.dis(f))
  2           0 BUILD_MAP                0
              3 STORE_FAST               0 (b)

  3           6 LOAD_FAST                0 (b)
              9 STORE_FAST               0 (b)
             12 LOAD_CONST               0 (None)
             15 RETURN_VALUE
As you see there are 2 operations LOAD_FAST and STORE_FAST on b for that b = b line. They achieve nothing useful, yet they are executed.
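If you want to convince yourself that b ends up bound to the very same object, a quick check (a sketch) is to compare id() before and after:
>>> b = {}
>>> before = id(b)
>>> b = b          # LOAD then STORE of the same reference, as in the disassembly
>>> id(b) == before
True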

Why does chained assignment work this way? [duplicate]

This question already has an answer here:
Python Assignment Operator Precedence - (a, b) = a[b] = {}, 5
(1 answer)
Closed 4 years ago.
I found the assignment a = a[1:] = [2] in an article. I tried it in Python 3 and Python 2; it works in both, but I don't understand how. = here is not like in C, which processes = from right to left. How does Python process the = operator?
Per the language docs on assignment:
An assignment statement evaluates the expression list (remember that this can be a single expression or a comma-separated list, the latter yielding a tuple) and assigns the single resulting object to each of the target lists, from left to right.
In this case, a = a[1:] = [2] has an expression list [2], and two "target lists", a and a[1:], where a is the left-most "target list".
You can see how this behaves by looking at the disassembly:
>>> import dis
>>> dis.dis('a = a[1:] = [2]')
  1           0 LOAD_CONST               0 (2)
              2 BUILD_LIST               1
              4 DUP_TOP
              6 STORE_NAME               0 (a)
              8 LOAD_NAME                0 (a)
             10 LOAD_CONST               1 (1)
             12 LOAD_CONST               2 (None)
             14 BUILD_SLICE              2
             16 STORE_SUBSCR
             18 LOAD_CONST               2 (None)
             20 RETURN_VALUE
(The last two lines of the disassembly can be ignored; dis compiles the string to a module-level code object, which implicitly returns None.)
The important part to note is that when you do x = y = some_val, some_val is loaded on the stack (in this case by the LOAD_CONST and BUILD_LIST), then the stack entry is duplicated and assigned, from left to right, to the targets given.
So when you do:
a = a[1:] = [2]
it makes two references to a brand new list containing 2. The first action is to store one of those references to a. Next, it stores the second reference to a[1:]; since slice assignment mutates a itself, it has to load a again, which retrieves the list just stored. Luckily, list is resilient against self-slice-assignment, or we'd have a problem (it would keep reading the value it had just appended until we ran out of memory and crashed); as it is, it behaves as if a copy of [2] had been assigned to replace all elements from index one onward.
The end result is equivalent to if you'd done:
_ = [2]
a = _
a[1:] = _
but it avoids the use of the _ name.
To be clear, here is the disassembly annotated:
Make the list [2]:
  1           0 LOAD_CONST               0 (2)
              2 BUILD_LIST               1
Make a copy of the reference to [2]:
              4 DUP_TOP
Perform the store to a:
              6 STORE_NAME               0 (a)
Perform the store to a[1:]:
              8 LOAD_NAME                0 (a)
             10 LOAD_CONST               1 (1)
             12 LOAD_CONST               2 (None)
             14 BUILD_SLICE              2
             16 STORE_SUBSCR
The way I understand such assignments is that this is equivalent to
temp = [2]
a = temp
a[1:] = temp
The resulting value of [2, 2] is consistent with this interpretation.
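An interactive check of the end state (a quick sketch) agrees with this equivalence:
>>> a = a[1:] = [2]
>>> a
[2, 2]
>>> temp = [2]
>>> a = temp
>>> a[1:] = temp
>>> a
[2, 2]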

List concatenation efficiency

Suppose I have two lists, A = [1,2,3,4] and B = [4,5,6]
I would like a list which includes the elements from both A and B. (I don't care if A itself gets altered).
A couple things I could do, and my understanding of them (please tell me if I am wrong):
A.extend(B) (elements of B get added in to A; A itself is altered)
C = A + B (makes a brand new object C, which contains the contents of A and B in it.)
I wanted to understand which is more efficient, so I was wondering if someone could please tell me whether my assumptions below are incorrect.
In the case of A.extend(B), I'm assuming Python only has to do 3 list add operations (the 3 elements of B, each of which it appends to A). However, in doing A + B, doesn't Python have to iterate through both lists A and B, in that case doing 7 list add operations? (i.e., it has to make a new list, go through A and put all its elements in it, and then go through B and put all its elements in it.)
Am I misunderstanding how the interpreter handles these things, or what these operations do in python?
Below is the bytecode analysis of both operations. There is no major difference between the two at this level. The only difference is that the .extend way involves a CALL_FUNCTION, which is slightly more expensive in Python than a BINARY_ADD.
But this should not be a problem unless you are working on huge amounts of data.
>>> import dis
>>> a = [1,2,3,4]
>>> b = [4,5,6]
>>> def f1(a,b):
...     a.extend(b)
>>> def f2(a,b):
...     c = a + b
>>> dis.dis(f1)
  2           0 LOAD_FAST                0 (a)
              3 LOAD_ATTR                0 (extend)
              6 LOAD_FAST                1 (b)
              9 CALL_FUNCTION            1
             12 POP_TOP
             13 LOAD_CONST               0 (None)
             16 RETURN_VALUE
>>> dis.dis(f2)
  2           0 LOAD_FAST                0 (a)
              3 LOAD_FAST                1 (b)
              6 BINARY_ADD
              7 STORE_FAST               2 (c)
             10 LOAD_CONST               0 (None)
             13 RETURN_VALUE
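If you care about wall-clock behaviour rather than bytecode, the usual approach is to measure with timeit; a minimal sketch (the list sizes and iteration count are arbitrary assumptions):
import timeit

setup = "A = list(range(1000)); B = list(range(1000))"

# Both statements produce a new combined list; the list(A) copy is there so
# that extend does not keep growing A across the timing loop.
t_extend = timeit.timeit("C = list(A); C.extend(B)", setup=setup, number=10000)
t_concat = timeit.timeit("C = A + B", setup=setup, number=10000)

print("extend + copy:", t_extend)
print("concatenation:", t_concat)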

What is the logic for x,y=y,x to swap the values? [duplicate]

This question already has an answer here:
How does swapping of members in tuples (a,b)=(b,a) work internally?
(1 answer)
Closed 7 years ago.
Code:
x="salil"
y="Ajay"
y,x=x,y
print x +" "+ y
Output:
Ajay salil
What is a logical explanation for this?
The way this mechanism works is a combination of two features -- forming implicit tuples, and tuple/list unpacking.
When you do something = x, y, what Python will do is implicitly create a tuple (a sort of immutable list) comprising the two elements, x and y. So, the following two lines of code are exactly equivalent:
something = x, y
something = (x, y)
A tuple can, of course, contain more than two elements:
something = a, b, c, d, e
something = (a, b, c, d, e)
The intended use case of this feature is to make it easier/cleaner to do things like return multiple values from a function:
def foo():
    return "hello", "world"
The second feature is tuple/list unpacking. Whenever you have a series of variables on the left-hand side and any sort of list, tuple, or other collection on the right, Python will attempt to match up each of the elements on the right to the ones on the left:
>>> a, b, c = [11, 22, 33]
>>> print(a)
11
>>> print(b)
22
>>> print(c)
33
If it helps, the line a, b, c = [11, 22, 33] is basically identical to doing:
temp = [11, 22, 33]
a = temp[0]
b = temp[1]
c = temp[2]
Note that the right-hand side can be literally any kind of collection, not just tuples or lists. So the following code is valid:
>>> p, q = "az"
>>> print(p + " " + q)
a z
>>>
>>> s, t = {'cat': 'foo', 'dog': 'bar'}
>>> print(s + " " + t)
cat dog
(Though, since dictionaries in Python before 3.7 were not obligated to be in any particular order, and since the order of the keys could be freely scrambled, unpacking them probably isn't going to be useful since you'd potentially get different results each time.)
If the number of variables and the number of elements in the collection do not match up, Python will throw an exception:
>>> a, b = (1, 2, 3)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: too many values to unpack (expected 2)
>>> a, b, c = (1, 2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: not enough values to unpack (expected 3, got 2)
So that means that if we called our foo function from above and did the following:
>>> x, y = foo()
...the variable x would equal the string "hello", and the variable y would equal the string "world".
So ultimately, that means that your original snippet of code:
x = "salil"
y = "Ajay"
y, x = x, y
...is logically equivalent to the following:
x = "salil"
y = "Ajay"
temp = (x, y) # evaluates to ("salil", "Ajay")
y, x = temp
...which broken down even more, is logically equivalent to the following:
x = "salil"
y = "Ajay"
temp = (x, y) # evaluates to ("salil", "Ajay")
y = temp[0]
x = temp[1]
Note that you can think of these two operations as taking place separately. First the tuple is formed and evaluated, then the tuple is unpacked back into the variables. The net effect is that the values of your two variables are interchanged.
(However, as it turns out, the CPython interpreter (the original and 'standard' implementation of Python) does a bit of optimization here: it will optimize the swap and won't do the full tuple unpacking -- see comments below. I'm not sure if other implementations of Python do the same, though I suspect they might. In any case, this sort of optimization is implementation-specific, and is independent to the design of the Python language itself.)
Okay, let's see:
import dis
src = '''
x="salil"
y="Ajay"
y,x=x,y
print x +" "+ y
'''
code = compile(src, '<string>', 'exec')
dis.dis(code)
This produces:
  2           0 LOAD_CONST               0 ('salil')
              3 STORE_NAME               0 (x)

  3           6 LOAD_CONST               1 ('Ajay')
              9 STORE_NAME               1 (y)

  4          12 LOAD_NAME                0 (x)
             15 LOAD_NAME                1 (y)
             18 ROT_TWO
             19 STORE_NAME               1 (y)
             22 STORE_NAME               0 (x)

  6          25 LOAD_NAME                0 (x)
             28 LOAD_CONST               2 (' ')
             31 BINARY_ADD
             32 LOAD_NAME                1 (y)
             35 BINARY_ADD
             36 PRINT_ITEM
             37 PRINT_NEWLINE
             38 LOAD_CONST               3 (None)
             41 RETURN_VALUE
Remember that Python operates as a stack machine. In this case, it optimized the assignment to a ROT_TWO (i.e. swap) instruction. There's also a ROT_THREE instruction. But let's try something else:
import dis
src = 'x, y, z, w = a, b, c, d'
code = compile(src, '<string>', 'exec')
dis.dis(code)
This produces the general form:
  1           0 LOAD_NAME                0 (a)
              3 LOAD_NAME                1 (b)
              6 LOAD_NAME                2 (c)
              9 LOAD_NAME                3 (d)
             12 BUILD_TUPLE              4
             15 UNPACK_SEQUENCE          4
             18 STORE_NAME               4 (x)
             21 STORE_NAME               5 (y)
             24 STORE_NAME               6 (z)
             27 STORE_NAME               7 (w)
             30 LOAD_CONST               0 (None)
             33 RETURN_VALUE
This works for swapping because the right-hand side of the = is evaluated first.
So the right side evaluates to
'salil', 'Ajay'
and then the assignment to y and x happens:
y, x = 'salil', 'Ajay'
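A quick interactive check of the end state (the values are the ones from the question):
>>> x = "salil"
>>> y = "Ajay"
>>> y, x = x, y
>>> x, y
('Ajay', 'salil')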
