I have a really simple question about references in python.
I assume you are familiar with this:
aa = [1,2]
bb = aa
aa[0] = 100
print(bb)
As you might guess, the output will be
[100, 2]
and it's totally OK ✔
Let's do another example:
l = [[],[],[]]
a = l[0]
l[0] = [1,2]
print(a)
But here the output is:
[]
I know why that happened.
It's because, on the third line (l[0] = [1,2]), we made an entirely different list and "replaced" l[0] with it (not changing the old list, but replacing the reference).
Now my question is: "Can I somehow replace l[0] with [1,2] and also keep a as a reference to it?"
P.S. Not something like l[0].append(1, 2)
Short answer: Not really, but there might be something close enough.
The problem is that, under the covers, a is a pointer-to-a-list, and l is a pointer-to-a-list-of-pointers-to-lists. When you write a = l[0], what that actually translates to at the CPU is "dereference the pointer l, treat the resulting region of memory as a list object, get the first object (which will be the address of another list), and set the value of pointer a to that address". Once you've done that, a and l[0] are only related by coincidence; they are two separate pointers that happen, for the moment, to point at the same object. If you assign to either variable, you're changing the value of a pointer, not the contents of the pointed-to object.
Broadly speaking, there are a few ways the computer could practically do what you ask.
Modify the pointed-to object (list) without modifying either pointer. That's what the append function does, along with the many other mutators of Python lists. If you want to do this in a way that perhaps more clearly expresses your intent, you could do l[0][:] = [1,2]. That's a list copy operation, copying into the object pointed to by both l[0] and a. This is your best bet as a developer, though note that copy operations are O(n). (A short runnable sketch of this, together with option 4 below, follows after these options.)
Implement a as a pointer-to-a-pointer-to-a-list that is automatically dereferenced (to merely a pointer-to-list) when accessed. This is not, AFAIK, something Python provides any support for; almost no language does. In C you could say list ** a = &(l[0]); but then any time you want to actually do anything with a you'd have to use *a instead.
Tell the interpreter to observe that a is an alias to l[0], rather than its own, separate variable. As far as I know, Python doesn't support this either. In C, you could do it as #define a (l[0]) though you'd want to #undef a when it went out of scope.
Rather than making a a list variable (which is implemented as a pointer-to-list), make it a function: a = lambda: l[0]. This means you have to use a() instead of a anywhere you want to get the actual content of l[0], and you can't assign to l[0] through a() (or through a directly). But it does work, in Python. You could even go so far as to use properties, which would let you skip the parentheses and assign through a, but at the cost of writing a bunch more code to wrap the lists (I'm not aware of a way to attach properties to lists directly, though one might exist, so you'd instead have to create a new object wrapping the list).
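To make options 1 and 4 concrete, here is a minimal sketch using only the names from the question (nothing beyond plain lists and a lambda is assumed):
l = [[], [], []]
a = l[0]

# Option 1: mutate the existing list object in place via slice assignment.
# Both l[0] and a still point at the same object, so a sees the new contents.
l[0][:] = [1, 2]
print(a)        # [1, 2]

# Option 4: make "a" a callable that re-reads l[0] each time it is used.
l = [[], [], []]
a = lambda: l[0]
l[0] = [1, 2]   # rebinds l[0] to a brand-new list...
print(a())      # [1, 2] ...but a() follows l, so it still sees the new list
Option 1 keeps a as a plain list; option 4 trades that for an extra pair of parentheses on every access.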
If you do a = l, then a[0] would yield [1,2] after the reassignment. As your code stands, l[0] = [1,2] rebinds l[0] to a brand-new list, so the connection between a and l[0] is broken (a still refers to the old list, which is no longer reachable through l). You need to either make a = l and use a[0], or reset the reference by calling a = l[0] again.
The original list at l[0] and the [1,2] list are two different objects. There is no way to do that short of re-assigning the a variable to l[0] again once the change has been made.
Related
So if I have a list a and append a to it, I will get a list that contains its own reference.
>>> a = [1,2]
>>> a.append(a)
>>> a
[1, 2, [...]]
>>> a[-1][-1][-1]
[1, 2, [...]]
And this basically results in seemingly infinite recursions.
And not only in lists, dictionaries as well:
>>> b = {'a':1,'b':2}
>>> b['c'] = b
>>> b
{'a': 1, 'b': 2, 'c': {...}}
It could have been a good way to store the list in its own last element and modify the other elements, but that wouldn't work, as the change will be seen through every recursive reference.
I get why this happens, i.e. due to their mutability. However, I am interested in actual use-cases of this behavior. Can somebody enlighten me?
The use case is that Python is a dynamically typed language, where anything can reference anything, including itself.
List elements are references to other objects, just like variable names and attributes and the keys and values in dictionaries. The references are not typed, variables or lists are not restricted to only referencing, say, integers or floating point values. Every reference can reference any valid Python object. (Python is also strongly typed, in that the objects have a specific type that won't just change; strings remain strings, lists stay lists).
So, because Python is dynamically typed, the following:
foo = []
# ...
foo = False
is valid, because foo isn't restricted to a specific type of object, and the same goes for Python list objects.
The moment your language allows this, you have to account for recursive structures, because containers are allowed to reference themselves, directly or indirectly. The list representation takes this into account by not blowing up when you do this and ask for a string representation. It is instead showing you a [...] entry when there is a circular reference. This happens not just for direct references either, you can create an indirect reference too:
>>> foo = []
>>> bar = []
>>> foo.append(bar)
>>> bar.append(foo)
>>> foo
[[[...]]]
In that output, foo is the outermost pair of brackets together with the [...] entry; bar is the pair of brackets in the middle.
There are plenty of practical situations where you'd want a self-referencing (circular) structure. The built-in OrderedDict object uses a circular linked list to track item order, for example. This is not normally easily visible as there is a C-optimised version of the type, but we can force the Python interpreter to use the pure-Python version (you want to use a fresh interpreter, this is kind-of hackish):
>>> import sys
>>> class ImportFailedModule:
...     def __getattr__(self, name):
...         raise ImportError
...
>>> sys.modules["_collections"] = ImportFailedModule() # block the extension module from being loaded
>>> del sys.modules["collections"] # force a re-import
>>> from collections import OrderedDict
Now we have a pure-Python version we can introspect:
>>> od = OrderedDict()
>>> vars(od)
{'_OrderedDict__hardroot': <collections._Link object at 0x10a854e00>, '_OrderedDict__root': <weakproxy at 0x10a861130 to _Link at 0x10a854e00>, '_OrderedDict__map': {}}
Because this ordered dict is empty, the root references itself:
>>> od._OrderedDict__root.next is od._OrderedDict__root
True
just like a list can reference itself. Add a key or two and the linked list grows, but remains linked to itself, eventually:
>>> od["foo"] = "bar"
>>> od._OrderedDict__root.next is od._OrderedDict__root
False
>>> od._OrderedDict__root.next.next is od._OrderedDict__root
True
>>> od["spam"] = 42
>>> od._OrderedDict__root.next.next is od._OrderedDict__root
False
>>> od._OrderedDict__root.next.next.next is od._OrderedDict__root
True
The circular linked list makes it easy to alter the key ordering without having to rebuild the whole underlying hash table.
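For instance (a small illustration, not part of the original answer), OrderedDict.move_to_end only has to relink a node in that circular linked list, which is why reordering is cheap. Continuing the session above with a fresh OrderedDict:
>>> od2 = OrderedDict()
>>> od2["foo"] = "bar"
>>> od2["spam"] = 42
>>> od2.move_to_end("foo")   # relink one node; the hash table itself is untouched
>>> list(od2)
['spam', 'foo']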
However, I am interested in actual use-cases of this behavior. Can somebody enlighten me?
I don't think there are many useful use-cases for this. The reason this is allowed is because there could be some actual use-cases for it and forbidding it would make the performance of these containers worse or increase their memory usage.
Python is dynamically typed and you can add any Python object to a list. That means one would need to take special precautions to forbid adding a list to itself. This is different from (most) statically typed languages, where this cannot happen because of the type system.
So in order to forbid such recursive data-structures, one would need to check on every addition/insertion/mutation whether the newly added object already participates in a higher layer of the data-structure. That means in the worst case checking whether the newly added element appears anywhere it could create a cycle. The problem here is that the same list can be referenced in multiple places and can already be part of multiple data-structures, and data-structures such as list/dict can be (almost) arbitrarily deep. That detection would be either slow (e.g. a linear search) or would take quite a bit of memory (a lookup structure). So it's cheaper to simply allow it.
The reason why Python detects this when printing is that you don't want the interpreter entering an infinite loop, raising a RecursionError, or overflowing the stack. That's why for some operations, like printing (but also deepcopy), Python temporarily creates a lookup table to detect these recursive data-structures and handles them appropriately.
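The same cycle-aware printing is available for your own containers via the standard library's reprlib.recursive_repr decorator; a minimal sketch (the Wrapper class here is made up for illustration):
import reprlib

class Wrapper:
    """A toy container whose repr survives self-reference."""
    def __init__(self):
        self.items = []

    @reprlib.recursive_repr(fillvalue="<...>")
    def __repr__(self):
        return f"Wrapper({self.items!r})"

w = Wrapper()
w.items.append(w)   # the wrapper now (indirectly) contains itself
print(w)            # Wrapper([<...>]) instead of infinite recursion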
Consider building a state machine that parses a string of digits and checks whether the number is divisible by 25. You could model each node as a list with 10 outgoing edges (one per digit), with some connections going back to the nodes themselves:
def canDiv25(s):
    # Each state is a list of 10 outgoing edges, indexed by the next digit.
    # n1g is the accepting state (the digits read so far end in 00, 25, 50 or 75).
    n0, n1, n1g, n2 = [], [], [], []
    n0.extend((n1, n0, n2, n0, n0, n1, n0, n2, n0, n0))
    n1.extend((n1g, n0, n2, n0, n0, n1, n0, n2, n0, n0))
    n1g.extend(n1)
    n2.extend((n1, n0, n2, n0, n0, n1g, n0, n2, n0, n0))
    cn = n0                 # current state
    for c in s:
        cn = cn[int(c)]     # follow the edge for this digit
    return cn is n1g

for i in range(144):
    print("%d %d" % (i, canDiv25(str(i))), end='\t')
While this state machine by itself has little practical use, it shows what could happen. Alternatively, you could have a simple adventure game where each room is represented as a dictionary: you can go, for example, NORTH, but that room of course has a back link to SOUTH. Also, sometimes game developers make it so that, for example to simulate a tricky path in some dungeon, the NORTH direction points back to the room itself.
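A hypothetical sketch of that adventure-game idea (room names and keys are made up): rooms are dicts whose exits reference other room dicts, including a back link and one exit that loops back into the same room:
hall = {"name": "hall"}
cellar = {"name": "cellar"}

hall["north"] = cellar      # going north from the hall...
cellar["south"] = hall      # ...and back south again: a reference cycle
cellar["north"] = cellar    # tricky corridor: north leads back into the cellar

room = hall
for step in ("north", "north", "south"):
    room = room[step]
print(room["name"])         # hall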
A very simple application of this would be a circular linked list where the last node in a list references the first node. These are useful for creating infinite resources, state machines or graphs in general.
def to_circular_list(items):
    head, *tail = items
    first = {"elem": head}
    current = first
    for item in tail:
        current['next'] = {"elem": item}
        current = current['next']
    current['next'] = first
    return first
to_circular_list([1, 2, 3, 4])
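If it helps, here is a small usage sketch of the function above: walking the result keeps wrapping around the same four nodes (itertools.cycle is the usual built-in shortcut for this kind of "infinite resource"):
node = to_circular_list([1, 2, 3, 4])
for _ in range(6):
    print(node["elem"], end=" ")   # 1 2 3 4 1 2
    node = node["next"]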
If it's not obvious how that relates to having a self-referencing object, think about what would happen if you only called to_circular_list([1]): you would end up with a data structure that looks like
item = {
    "elem": 1,
    "next": item
}
If the language didn't support this kind of direct self referencing, it would be impossible to use circular linked lists and many other concepts that rely on self references as a tool in Python.
The reason this is possible is simply because the syntax of Python doesn't prohibit it, much in the way any C or C++ object can contain a reference to itself. An example might be: https://www.geeksforgeeks.org/self-referential-structures/
As #MSeifert said, you will generally get a RecursionError at some point if you try to access the list repeatedly from itself. Code that uses this pattern, like the following:
a = [1, 2]
a.append(a)

def loop(l):
    for item in l:
        if isinstance(item, list):
            loop(l)
        else:
            print(item)
will eventually crash without some sort of stopping condition. (print(a) itself will not crash, though: as shown above, the list repr detects the cycle and prints [...].) However:
a = [1, 2]
while True:
    for item in a:
        print(item)
will run infinitely with the same expected output as the above. Very few recursive problems don't unravel into a simple while loop. For an example of recursive problems that do require a self-referential structure, look up Ackermann's function: http://mathworld.wolfram.com/AckermannFunction.html. This function could be modified to use a self-referential list.
There is certainly precedent for self-referential containers or tree structures, particularly in math, but on a computer they are all limited by the size of the call stack and CPU time, making it impractical to investigate them without some sort of constraint.
I am new to Python from R. I have recently spent a lot of time reading up on how everything in Python is an object, objects can call methods on themselves, methods are functions within a class, yada yada yada.
Here's what I don't understand. Take the following simple code:
mylist = [3, 1, 7]
If I want to know how many times the number 7 occurs, I can do:
mylist.count(7)
That, of course, returns 1. And if I want to save the count number to another variable:
seven_counts = mylist.count(7)
So far, so good. Other than the syntax, the behavior is similar to R. However, let's say I am thinking about adding a number to my list:
mylist.append(9)
Wait a minute, that method actually changed the variable itself! (i.e., "mylist" has been altered and now includes the number 9 as the fourth element in the list.) Assigning the result to a new variable (like I did with seven_counts) produces garbage:
newlist = mylist.append(9)
I find the inconsistency in this behavior a bit odd, and frankly undesirable. (Let's say I wanted to see what the result of the append looked like first and then have the option to decide whether or not I want to assign it to a new variable.)
My question is simple:
Is there a way to know in advance if calling a particular method will actually alter your variable (object)?
Aside from reading the documentation (which for some methods will include type annotations specifying the return value) or playing with the method in the interactive interpreter (including using help() to check the docstring for a type annotation), no, you can't know up front just by looking at the method.
That said, the behavior you're seeing is intentional. Python methods either return a new modified copy of the object or modify the object in place; at least among built-ins, they never do both (some methods mutate the object and return a non-None value, but it's never the object just mutated; the pop method of dict and list is an example of this case).
This either/or behavior is intentional; if methods didn't obey this rule, you'd have an even more confusing and harder-to-identify problem, namely, determining whether append mutated the value it was called on, or returned a new object. You definitely got back a list, but is it a new list or the same list? If it both mutated the list and returned it, then
newlist = mylist.append(9)
is a little strange; newlist and mylist would be aliases to the same list (so why have both names?). You might not even notice for a while; you'd continue using newlist, thinking it was independent of mylist, only to look at mylist and discover it was all messed up. By having all such "modify in place" methods return None (or at least, not the original object), the error is discovered more quickly/easily; if you try and use newlist, mistakenly believing it to be a list, you'll immediately get TypeErrors or AttributeErrors.
Basically, the only way to know in advance is to read the documentation. For methods whose name indicates a modifying operation, you can check the return value and often get an idea as to whether they're mutating. It helps to know what types are mutable in the first place; list, dict, set and bytearray are all mutable, and the methods they have that their immutable counterparts (aside from dict, which has no immutable counterpart) lack tend to mutate the object in place.
The default tends to be to mutate the object in place simply because that's more efficient; if you have a 100,000 element list, a default behavior for append that made a new 100,001 element list and returned it would be extremely inefficient (and there would be no obvious way to avoid it). For immutable types (e.g. str, tuple, frozenset) this is unavoidable, and you can use those types if you want a guarantee that the object is never mutated in place, but it comes at the cost of unnecessary creation and destruction of objects that will slow down your code in most cases.
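A quick illustration of that either/or rule, using only built-ins: list.sort mutates in place and returns None, while sorted leaves the original alone and returns a new list.
nums = [3, 1, 7]
result = nums.sort()
print(result, nums)    # None [1, 3, 7]

nums = [3, 1, 7]
result = sorted(nums)
print(result, nums)    # [1, 3, 7] [3, 1, 7]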
Just check out the docs:
>>> list.count.__doc__
'L.count(value) -> integer -- return number of occurrences of value'
>>> list.append.__doc__
'L.append(object) -> None -- append object to end'
There isn't really an easy way to tell, but:
immutable object --> no way of changing through method calls
So, for example, tuple has no methods which affect the tuple, as it is unchangeable, so its methods can only return new instances.
And if you "wanted to see what the result of the append looked like first and then have the option to decide whether or not I want to assign it to a new variable" then you can concatenate the list with a new list with one element.
i.e.
>>> l = [1,2,3]
>>> k = l + [4]
>>> l
[1, 2, 3]
>>> k
[1, 2, 3, 4]
Not from merely your invocation (your method call). You can guarantee that a method won't change the object if the object is immutable; methods that are defined to change an object will either not be defined for the type you're using, or will fail when executed.
In Real Life, you look at the method's documentation: that will tell you exactly what happens.
[I was about to include what Joe Iddon's answer covers ...]
If I try to modify the 'board' list in-place in the way below, it doesn't work, it seems like it generate some new 'board' instead of modify in-place.
def func(self, board):
    """
    :type board: List[List[str]]
    """
    board = [['A' for j in range(len(board[0]))] for i in range(len(board))]
    return
I have to do something like this to modify it in-place, what's the reason? Thanks.
for i in range(len(board)):
    for j in range(len(board[0])):
        board[i][j] = 'A'
You seem to understand the difference between these two cases, and want to know why Python makes you handle them differently?
I have to do something like this to modify it in-place, what's the reason?
Creating a new copy is something that has a value. So it makes sense for it to be an expression. In fact, list comprehensions would be useless if they weren't expressions.
Mutating a list in-place isn't something that has a value. So, there's no reason to make it an expression, and in fact, it would be weird to do so. Sure, you could come up with some kind of value (like, say, the list being mutated). But that would be at odds with everything else in the design of Python: spam.append(eggs) doesn't return spam, it returns nothing. spam = eggs doesn't have a value. And so on.
Secondarily, the comprehension style feeds very well into the iterable paradigm, which is fundamental to Python. For example, notice that you can turn a list comprehension into a generator comprehension (which gives you a lazy iterator over values that are computed on demand) just by changing the […] to (…). What useful equivalent could there be for mutation?
Making the transforming-copy more convenient also encourages people to use a non-mutating style, which often leads to better answers for many problems. When you want to know how to avoid writing three lines of nested statement to mutate some global, the answer is to stop mutating that global and instead pass in a parameter and return the new value.
Also, the syntax was copied from Haskell, where there is no mutation.
But of course all those "often" and "usually" don't mean "never". Sometimes (unless you're designing a language with no mutation), you need to do things in-place. That's why we have list.sort as well as sorted. (And a lot of work has gone into optimizing the hell out of list.sort; it's not just an afterthought.)
Python doesn't stop you from doing it. It just doesn't bend over quite as far to make it easy as it does for copying.
That is not modifying it in place. The list comprehension syntax [x for y in z] creates a new list; the original list is not modified by it. Making the name inside the function point to a new list won't change which list the name outside the function is pointing to.
In other words, when calling a function Python passes a reference to the object, not the name, so there is no easy way to change which object the variable name outside the function is referring to.
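If you do want a one-liner that still mutates in place, a hedged alternative (not from the original answers) is slice assignment: it replaces the contents of the existing list object, so the caller's board sees the change.
def func(board):
    # build the new rows, then copy them into the *existing* outer list
    board[:] = [['A'] * len(board[0]) for _ in range(len(board))]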
i.e. we have the global declaration, but no local.
"Normally" arguments are local, I think, or they certainly behave that way.
However if an argument is, say, a list and a method is applied which modifies the list, some surprising (to me) results can ensue.
I have 2 questions: what is the proper way to ensure that a variable is truly local?
I wound up using the following, which works, but it can hardly be the proper way of doing it:
def AexclB(a, b):
    z = a + []   # yuk
    for k in range(0, len(b)):
        try: z.remove(b[k])
        except: continue
    return z
Absent the +[], "a" in the calling scope gets modified, which is not desired.
(The issue here is using a list method, which mutates the list in place.)
The supplementary question is, why is there no "local" declaration?
Finally, in trying to pin this down, I made various mickey mouse functions which all behaved as expected except the last one:
def fun4(a):
    z = a
    z = z.append(["!!"])
    return z

a = ["hello"]
print "a=", a
print "fun4(a)=", fun4(a)
print "a=", a
which produced the following on the console:
a= ['hello']
fun4(a)= None
a= ['hello', ['!!']]
...
>>>
The 'None' result was not expected (by me).
Python 2.7 btw in case that matters.
PS: I've tried searching here and elsewhere but not succeeded in finding anything corresponding exactly - there's lots about making variables global, sadly.
It's not that z isn't a local variable in your function. Rather when you have the line z = a, you are making z refer to the same list in memory that a already points to. If you want z to be a copy of a, then you should write z = a[:] or z = list(a).
See this link for some illustrations and a bit more explanation http://henry.precheur.org/python/copy_list
Python will not copy objects unless you explicitly ask it to. Integers and strings are not modifiable, so every operation on them returns a new instance of the type. Lists, dictionaries, and basically every other object in Python are mutable, so operations like list.append happen in-place (and therefore return None).
If you want the variable to be a copy, you must explicitly copy it. In the case of lists, you slice them:
z = a[:]
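A short demonstration of the difference, assuming the same remove-style mutation as in the question:
a = ["x", "y", "z"]
z = a            # same object: mutations through z show up in a
z.remove("y")
print(a)         # ['x', 'z']

a = ["x", "y", "z"]
z = a[:]         # shallow copy: a is left alone
z.remove("y")
print(a)         # ['x', 'y', 'z']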
There is a great answer that will cover most of your question here, which explains mutable and immutable types and how they are kept in memory and referenced. The first section of the answer is for you (before the "How do we get around this?" header).
In the following line
z = z.append(["!!"])
Lists are mutable objects, so when you call append, it updates the referenced object; it does not create a new one and return it. If a method or function does not return anything, it means it returns None.
The above link also gives an immutable example so you can see the real difference.
You cannot make a mutable object act like it is immutable, but you can create a new object from an existing mutable one instead of passing the reference around:
a = [1,2,3]
b = a[:]
For more options you can check here
What you're missing is that all variable assignment in python is by reference (or by pointer, if you like). Passing arguments to a function literally assigns values from the caller to the arguments of the function, by reference. If you dig into the reference, and change something inside it, the caller will see that change.
If you want to ensure that callers will not have their values changed, you can either try to use immutable values more often (tuple, frozenset, str, int, bool, NoneType), or be certain to take copies of your data before mutating it in place.
In summary, scoping isn't involved in your problem here. Mutability is.
Is that clear now?
Still not sure what's the 'correct' way to force the copy; there are various suggestions here.
It differs by data type, but generally <type>(obj) will do the trick. For example list([1, 2]) and dict({1:2}) both return (shallow!) copies of their argument.
If, however, you have a tree of mutable objects and you don't know a priori which level of the tree you might modify, you need the copy module. That said, I've only needed this a handful of times (in 8 years of full-time Python), and most of those ended up causing bugs. If you need this, it's a code smell, in my opinion.
The complexity of maintaining copies of mutable objects is the reason why there is a growing trend of using immutable objects by default. In the clojure language, all data types are immutable by default and mutability is treated as a special cases to be minimized.
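To make the shallow-vs-deep distinction concrete, a small sketch using only the copy module:
import copy

tree = [[1, 2], [3, 4]]
shallow = copy.copy(tree)
deep = copy.deepcopy(tree)

tree[0][0] = 99
print(shallow[0][0])   # 99 -- the inner lists are shared
print(deep[0][0])      # 1  -- a fully independent copy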
If you need to work on a list or other object in a truly local context you need to explicitly make a copy or a deep copy of it.
from copy import copy

def fn(x):
    y = copy(x)   # work on y; the caller's x is left untouched
I have no use for a global variable and never define one explicitly, and yet I seem to have one in my code. Can you help me make it local, please?
def algo(X):  # randomized algorithm
    while len(X) > 2:
        # do a bunch of things to nested list X
        print(X)
        # tracing: output is the same every time, where it shouldn't be.
    return len(X[1][1])

def find_min(X):  # iterate algo() multiple times to find minimum
    m = float('inf')
    for i in some_range:
        new = algo(X)
        m = min(m, new)
    return m
X = [[[..], [...]],
[[..], [...]],
[[..], [...]]]
print(find_min(X))
print(X)
# same value as inside the algo() call, even though it shouldn't be affected.
X appears to be behaving like a global variable. The randomized algorithm algo() is really performed only once on the first call because with X retaining its changed value, it never makes it inside the while loop. The purpose of iterations in find_min is thus defeated.
I'm new to python and even newer to this forum, so let me know if I need to clarify my question. Thanks.
update Many thanks for all the answers so far. I almost understand it, except I've done something like this before with a happier result. Could you explain why this code below is different, please?
def qsort(X):
    for ...
        # recursively sort X in place
        count += 1   # count number of operations
    return X, count

X = [ , , , ]
Y, count = qsort(X)
print(Y)   # sorted
print(X)   # original, unsorted.
Thank you.
update II To answer my own second question, the difference seems to be the use of a list method in the first code (not shown) and the lack thereof in the second code.
As others have pointed out already, the problem is that the list is passed as a reference to the function, so the list inside the function body is the very same object as the one you passed to it as an argument. Any mutations your function performs are thus visible from outside.
To solve this, your algo function should operate on a copy of the list that it gets passed.
As you're operating on a nested list, you should use the deepcopy function from the copy module to create a copy of your list that you can freely mutate without affecting anything outside of your function. The built-in list function can also be used to copy lists, but it only creates shallow copies, which isn't what you want for nested lists, because the inner lists would still just be pointers to the same objects.
from copy import deepcopy

def algo(X):
    X = deepcopy(X)
    ...
When you do find_min(X), you are passing the object X (a list in this case) to the function. If that function mutates the list (e.g., by appending to it) then yes, it will affect the original object. Python does not copy objects just because you pass them to a function.
When you pass an object to a python function, the object isn't copied, but rather a pointer to the object is passed.
This makes sense because it greatly speeds up execution - in the case of a long list, there is no need to copy all of its elements.
However, this means that when you modify a passed object (for example, your list X), the modification applies to that object, even after the function returns.
For example:
def foo(x):
    x.extend('a')
    print(x)

l = []
foo(l)
foo(l)
Will print:
['a']
['a', 'a']
Python lists are mutable (i.e., they can be changed), and the call to algo inside the find_min function does change the value of X (i.e., lists are effectively passed by reference). See this SO question, for example.