Python Multiple Variable Declaration

I am reading a book, in which they write:
fp1, residuals, rank, sv, rcond = sp.polyfit(x, y, 1, full=True)
It seems the sp.polyfit method assigns values to each of these variables in some particular order.
For instance:
>>> print("Model parameters: %s" % fp1)
Model parameters: [ 2.59619213 989.02487106]
>>> print(res)
[ 3.17389767e+08]
(I don't know where res is being defined... but...) Is this Python's way of creating an object?
In other languages, you might do something like this:
Foo myFooObject = bar.GenerateFoo();
myFooObject.foo();
myFooObject.bar();
The general syntax of python in this way is confusing to me. Thanks for helping me to understand.

This has nothing to do with object creation -- it's an example of tuple (or more generally sequence) unpacking in python.
A tuple is a fixed (immutable) sequence of items, and you can assign a sequence of values to a sequence of names with a statement like
a, b, c = 1, 'two', 3.0
which is the same as
a = 1
b = 'two'
c = 3.0
(Note that you can use this syntax to swap items: a, b = b, a.)
So what is happening in your example is that scipy.polyfit has a line like
return fp, resids, rank, ev, rcondnum
and you are assigning your variables to these.
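As a rough sketch of what that unpacking looks like in practice (assuming numpy is available and using made-up sample data; the book's sp alias presumably resolves to the same polyfit routine):
import numpy as np

x = [0, 1, 2, 3]
y = [1, 3, 5, 7]

# full=True makes polyfit return extra diagnostics alongside the coefficients,
# and the multiple assignment gives each returned value its own name
fp1, residuals, rank, sv, rcond = np.polyfit(x, y, 1, full=True)
print(fp1)  # array of fitted coefficients, highest degree first (roughly [2. 1.] here)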

It's tuple unpacking.
Say you have some tuple:
t = (1, 2, 3)
then you can use that to set three variables with:
x, y, z = t # x becomes 1, y becomes 2, z becomes 3
Your function sp.polyfit simply returns a tuple.
Actually it works with any iterable, not just tuples, but unpacking tuples is by far the most common case. Also, the number of elements in the iterable has to be exactly equal to the number of variables.
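A quick sketch of both points (the exact ValueError message is what current CPython 3 prints and may vary across versions):
a, b, c = "xyz"       # any iterable works: a == 'x', b == 'y', c == 'z'
a, b = [10, 20]       # lists work too
try:
    a, b = [1, 2, 3]  # wrong number of elements
except ValueError as e:
    print(e)          # too many values to unpack (expected 2)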

Related

What is the meaning of this idiom assigning from an empty list to an empty list?

I ran into some code that calls a function f():
def f():
    return 1, []
a, [] = f()
I was wondering why the author didn't use a, _ = f().
Why is this syntax even allowed?
[] = []
a, [] = 444, []
Where can I read more about the [] = [] philosophy?
Consider a less strange-looking example:
def f():
    return 1, [2, 3]
a, [b, c] = f()
This is an ordinary destructuring assignment; we simply assign a = 1, b = 2 and c = 3 simultaneously. The structure on the left-hand side matches that on the right-hand side (on the left-hand side we can use [] and () interchangeably, and the corresponding list and tuple objects don't exist after the fact - or at any point; they're purely syntactic here).
The code you have shown is simply the degenerate case of this, where there are zero elements in the sub-sequence. The 1 is assigned to a, and all zero of the elements [] are assigned to zero target names.
I was wondering why the author didn't use a, _ = f()
I cannot read the author's mind, but one possible reason:
def f():
    return 1, g()

# later:
# precondition: the result from g() must be empty for code correctness
a, [] = f()  # implicitly raises ValueError if the condition is not met
This is easy to write, but requires some explanation and is perhaps not the greatest approach. Explicit is better than implicit.
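A more explicit spelling of the same check might look like this (a sketch only; g() is the same placeholder as above, and the error message is made up):
def f():
    return 1, g()

a, rest = f()
if rest:  # spell the precondition out instead of relying on unpacking
    raise ValueError("expected g() to return an empty sequence")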
Why is this syntax even allowed?
Because special cases aren't special enough to break the rules.
The syntax [] = [] is an edge case of assignment to a target list: it unpacks values from an empty iterable into zero target names.
Python's assignment statement allows multiple assignment by using list or tuple syntax for the targets. The source can be any iterable with the correct number of items.
>>> a, b = range(2)
>>> a
0
>>> (a, b) = {1: "one", 2: "two"}
>>> a
1
>>> [a, b] = {3, 5}
>>> a
3
Notably, the left hand side does not denote an actual tuple/list in this case. It merely defines the structure in which the actual assignment targets a and b are, akin to pattern matching.
As an edge case, the syntax also allows specifying one or zero length assignment lists.
>>> # assign single-element iterable into single name
>>> [a] = {15}
>>> a
15
>>> # assign no-element iterable into no name
>>> [] = []
It is worth pointing out that the left and right hand side [] are fundamentally different things. The right hand side ... = [] denotes an actual list object with no elements. The left hand side [] = ... merely denotes "zero names".
Multiple assignment often serves the two-fold purpose of performing an actual assignment while checking the number of items.
>>> # accept any number of secondary items
>>> a, b = 42, [16, 72]
>>> # accept only one secondary item
>>> a, [b] = 42, [16, 72]
...
ValueError: too many values to unpack (expected 1)
By using an empty target list, one enforces that there is an iterable but that it is empty.
>>> # empty iterable: fine
>>> a, [] = 444, []
>>> # non-empty iterable: error
>>> a, [] = 444, ["oops"]
...
ValueError: too many values to unpack (expected 0)
As already mentioned, this syntax is allowed due to the so-called "pattern-matching" in Python. In this particular case, the list is empty, so Python matches nothing with nothing. If you had more elements in the list, this would actually be helpful, as Karl Knechtel has shown in his answer.
Your example is just a special (and useless) case of pattern matching (for matching 0 elements).

Is there a way to splat-assign as tuple instead of list when unpacking?

I was recently surprised to find that the "splat" (unary *) operator always captures slices as a list during item unpacking, even when the sequence being unpacked has another type:
>>> x, *y, z = tuple(range(5))
>>> y
[1, 2, 3] # list, was expecting tuple
Compare to how this assignment would be written without unpacking:
>>> my_tuple = tuple(range(5))
>>> x = my_tuple[0]
>>> y = my_tuple[1:-1]
>>> z = my_tuple[-1]
>>> y
(1, 2, 3)
It is also inconsistent with how the splat operator behaves in function arguments:
>>> def f(*args):
...     return args, type(args)
...
>>> f()
((), <class 'tuple'>)
In order to recover y as a tuple after unpacking, I now have to write:
>>> x, *y, z = tuple(range(5))
>>> y = tuple(y)
Which is still much better than the slice-based syntax, but nonetheless suffers from what I consider to be a very unnecessary and unexpected loss of elegance. Is there any way to recover y as a tuple instead of a list without post-assignment processing?
I tried to force Python to interpret y as a tuple by writing x, *(*y,), z = ..., but it still ended up as a list. And of course silly things like x, *tuple(y), z don't work in Python.
I am currently using Python 3.8.3 but solutions/suggestions/explanations involving higher versions (as they become available) are also welcome.
This is by design. Quoting the official docs about Assignment:
...The first items of the iterable are assigned, from left to right, to the targets before the starred target. The final items of the iterable are assigned to the targets after the starred target. A list of the remaining items in the iterable is then assigned to the starred target (the list can be empty).
It is considered likely that a Python user will want to mutate y afterwards, so the list type was chosen over the tuple.
Quoting the Acceptance section of PEP 3132 that I found through a link in this related question:
After a short discussion on the python-3000 list [1], the PEP was accepted by Guido in its current form. Possible changes discussed were:
- Only allow a starred expression as the last item in the exprlist. This would simplify the unpacking code a bit and allow for the starred expression to be assigned an iterator. This behavior was rejected because it would be too surprising.
- Try to give the starred target the same type as the source iterable, for example, b in a, *b = "hello" would be assigned the string "ello". This may seem nice, but is impossible to get right consistently with all iterables.
- Make the starred target a tuple instead of a list. This would be consistent with a function's *args, but make further processing of the result harder.
So converting with y = tuple(y) afterwards is your only option.
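If you need this in several places, the conversion can at least be tucked into a small helper; unpack_ends below is a made-up name used purely for illustration:
def unpack_ends(seq):
    # first element, middle slice as a tuple, last element
    first, *middle, last = seq
    return first, tuple(middle), last

x, y, z = unpack_ends(tuple(range(5)))
# y == (1, 2, 3), and it is a tuple this time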

Python unpacking from list comprehension over empty input

When working with a function that returns multiple values with a tuple, I will often find myself using the following idiom to unpack the results from inside a list comprehension.
fiz, buz = zip(*[f(x) for x in input])
Most of the time this works fine, but it throws a ValueError: need more than 0 values to unpack if input is empty. The two ways I can think of to get around this are
fiz = []
buz = []
for x in input:
    a, b = f(x)
    fiz.append(a)
    buz.append(b)
and
if input:
    fiz, buz = zip(*[f(x) for x in input])
else:
    fiz, buz = [], []
but neither of these feels especially Pythonic—the former is overly verbose and the latter doesn't work if input is a generator rather than a list (in addition to requiring an if/else where I feel like one really shouldn't be needed).
Is there a good simple way to do this? I've mostly been working in Python 2.7 recently, but would also be interested in knowing any Python 3 solutions if they are different.
If f = lambda x: (x, x**2) then this works
x, y = zip(*map(f, input)) if len(input) else ((), ())
If input=[], x=() and y=().
If input=[2], x=(2,) and y=(4,)
If input=[2,3], x=(2,3) and y=(4,9)
They're tuples (not lists), but that's pretty easy to change.
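A Python 3 variant of the same idea, as a rough sketch (unzip_pairs is a made-up helper name), relies on list(zip(*pairs)) being an empty, falsy list when the input is empty:
def f(x):
    return x, x ** 2

def unzip_pairs(items):
    pairs = [f(i) for i in items]
    # list(zip(*pairs)) is [] when pairs is empty, so "or" supplies the defaults
    return list(zip(*pairs)) or ((), ())

fiz, buz = unzip_pairs([])      # fiz == (), buz == ()
fiz, buz = unzip_pairs([2, 3])  # fiz == (2, 3), buz == (4, 9)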
I would consider using collections.namedtuple() for this sort of thing. I believe named tuples are deemed more Pythonic, and they avoid the need for complicated list comprehensions and zipping / unpacking.
From the documentation:
>>> from collections import namedtuple
>>> Point = namedtuple('Point', ['x', 'y'])
>>> p = Point(11, y=22) # instantiate with positional or keyword arguments
>>> p[0] + p[1] # indexable like the plain tuple (11, 22)
33
>>> x, y = p # unpack like a regular tuple
>>> x, y
(11, 22)
>>> p.x + p.y # fields also accessible by name
33
>>> p # readable __repr__ with a name=value style
Point(x=11, y=22)
You could use:
fiz = []
buz = []
results = [fiz, buz]
for x in input:
    list(map(lambda res, val: res.append(val), results, f(x)))
print(results)
Note about list(map(...)): in Python 3, map returns a lazy iterator, so we must consume it if we want the lambda to be executed; wrapping it in list does that.
(adapted from my answer to Pythonic way to append output of function to several lists, where you could find other ideas.)

Python: sort a list of objects based on attributes in the order of another list

I am working with Python list sorting.
I have two lists: one is a list of integers, and the other is a list of objects. Each object has an id attribute, which is also an integer. I want to sort the object list by the id attribute, in the order in which the same id appears in the first list. Here is an example:
I got a = [1,2,3,4,5]
and b = [o,p,q,r,s], where o.id = 2, p.id = 1, q.id = 3, r.id = 5, s.id = 4
and I want my list b to be sorted in the order of its id appears in list a, which is like this:
sorted_b = [p, o, q, s, r]
Of course, I can achieve this by using nested loops:
sorted_b = []
for i in a:
    for j in b:
        if j.id == i:
            sorted_b.append(j)
            break
but this is a classically ugly, non-Pythonic way to solve the problem. I wonder if there is a neater way to do this, for example using the sort method, but I don't know how.
>>> from collections import namedtuple
>>> Foo = namedtuple('Foo', 'name id') # this represents your class with id attribute
>>> a = [1,2,3,4,5]
>>> b = [Foo(name='o', id=2), Foo(name='p', id=1), Foo(name='q', id=3), Foo(name='r', id=5), Foo(name='s', id=4)]
>>> sorted(b, key=lambda x: a.index(x.id))
[Foo(name='p', id=1), Foo(name='o', id=2), Foo(name='q', id=3), Foo(name='s', id=4), Foo(name='r', id=5)]
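One caveat: a.index does a linear scan of a for every element of b. If the lists get long, a variant (sketched here, continuing the same example) precomputes the positions once:
>>> position = {id_: i for i, id_ in enumerate(a)}  # id -> index in a
>>> sorted(b, key=lambda x: position[x.id])
[Foo(name='p', id=1), Foo(name='o', id=2), Foo(name='q', id=3), Foo(name='s', id=4), Foo(name='r', id=5)]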
This is a simple way to do it:
# Create a dictionary that maps from an ID to the corresponding object
object_by_id = dict((x.id, x) for x in b)
sorted_b = [object_by_id[i] for i in a]
If the list gets big, it's probably the fastest way, too.
You can do it with a list comprehension, but in general it is the same.
sorted_b = [ y for x in a for y in b if y.id == x ]
There is a sorted function in Python. It takes an optional keyword argument cmp. You can pass it your own comparison function for sorting.
cmp definition from the docs:
custom comparison should return a negative, zero or positive number depending on whether the first argument is considered smaller than, equal to, or larger than the second argument
a = [1,2,3,4,5]

def compare(el1, el2):
    if a.index(el1.id) < a.index(el2.id): return -1
    if a.index(el1.id) > a.index(el2.id): return 1
    return 0
sorted(b, cmp=compare)
This is more straightforward; however, I would encourage you to use the key argument as jamylak described in his answer, because it's more Pythonic and the cmp argument is no longer supported in Python 3.
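If you do want to keep the comparison function on Python 3, it can be adapted with functools.cmp_to_key; a brief sketch:
from functools import cmp_to_key

sorted_b = sorted(b, key=cmp_to_key(compare))  # same compare function, wrapped as a key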

Python: assign values to variables in a list or object [duplicate]

I want to do the following:
a = 1
b = 2
c = 3
tom = [a,b,c]
for i in tom:
    i = 6
The desired result is a = 6
The actual result is a = 1
I'm guessing that there is no way to do this without some kind of exec. Correct?
Initially I misunderstood your question, and I see that kindall got it right. But I think this question shows a need for a more detailed explanation of how Python works. The best way to think about variables in Python is to think of variable names as having arrows attached to them, and values as objects to which those arrows point. Variable names point to objects. So when you do this:
a = 1
You're really saying "a points to 1". And when you do this:
b = a
You're really saying "b points to the same object as a". Under normal circumstances, a variable can't point to another variable name; it can only point at the same object that the other variable points at. So when you do this:
tom = [a, b, c]
You aren't creating a list that points to the variable names a, b, and c; you're creating a list that points to the same objects as a, b, and c. If you change where a points, it has no effect on where tom[0] points. If you change where tom[0] points, it has no effect on where a points.
Now, as others have pointed out, you can programmatically alter the values of variable names, either using exec as you suggested (not recommended), or by altering globals() (also not recommended). But most of the time, it's just not worth it.
If you really want to do this, my suggestion would be either simply to use a dictionary (as suggested by DzinX) or, for a solution that's closer to the spirit of your question, and still reasonably clean, you could simply use a mutable object. Then you could use getattr and setattr to programmatically alter the attributes of that object like so:
>>> class Foo():
...     pass
...
>>> f = Foo()
>>> f.a = 1
>>> setattr(f, 'b', 2)
>>> getattr(f, 'a')
1
>>> f.b
2
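Applied to the original loop, a minimal sketch along the same lines (attribute names replace the bare variables a, b and c):
>>> f = Foo()
>>> for name in ['a', 'b', 'c']:
...     setattr(f, name, 6)
...
>>> f.a
6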
Generally, the best solution is to just use a dictionary. But occasionally situations might arise in which the above is better.
tom = [a, b, c] puts the values 1, 2, and 3 into the list. Once these values are in the list, there's no way to know what name(s) are pointing to them. Assuming they are global variables, you could (but almost certainly shouldn't) do this:
tom = ["a", "b", "c"]
for n in tom:
    globals()[n] = 1
Trying to set individual variables in a loop is almost always the wrong approach. The values clearly have something in common, otherwise you wouldn't want to change them all in a loop, so store them in a list (or a dictionary, if you need names for them) and access and change them there, instead of using individual variables.
More concisely,
a, b, c = map(lambda x: 6, [1, 2, 3])
or
a, b, c = 1, 2, 3
a, b, c = map(lambda x: 6, [a, b, c])
which could easily be generalised if you want to assign each to different values based on their original values.
It would be best if you didn't use variables, but keys in the dictionary, like this:
values = {
    'a': 1,
    'b': 2,
    'c': 3
}
for k in values:
    values[k] = 6
print values['a']
# prints: 6
If you want to change only some values, use:
for k in ['a', 'c']:
    values[k] = 6
Here's an idea.
seniority = [range(1, 6)]
people = ["Richard", "Rob", "Steve", "Terry", "Micah"]
people = seniority
print people
output: [[1, 2, 3, 4, 5]]
