Python class function default variables are class objects? [duplicate]

This question already has answers here:
Why does using `arg=None` fix Python's mutable default argument issue?
(5 answers)
"Least Astonishment" and the Mutable Default Argument
(33 answers)
Closed 9 months ago.
I was writing some code this afternoon, and stumbled across a bug in my code. I noticed that the default values for one of my newly created objects was carrying over from another object! For example:
class One(object):
    def __init__(self, my_list=[]):
        self.my_list = my_list
one1 = One()
print(one1.my_list)
[] # empty list, what you'd expect.
one1.my_list.append('hi')
print(one1.my_list)
['hi'] # list with the new value in it, what you'd expect.
one2 = One()
print(one2.my_list)
['hi'] # Hey! It saved the variable from the other One!
So I know it can be solved by doing this:
class One(object):
    def __init__(self, my_list=None):
        self.my_list = my_list if my_list is not None else []
What I would like to know is... Why? Why are Python classes structured so that the default values are saved across instances of the class?

This is a known behaviour of the way Python default values work, and it often surprises the unwary. The empty list object [] is created at the time the function is defined, rather than each time it is called.
To fix it, try:
def __init__(self, my_list=None):
    if my_list is None:
        my_list = []
    self.my_list = my_list
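You can see this directly: the default value lives on the function object itself, in its __defaults__ attribute (Python 3 naming), and is created exactly once. A minimal sketch:

```python
def greet(names=[]):
    names.append("hi")
    return names

# The default list already exists on the function object, before any call:
print(greet.__defaults__)   # ([],)

greet()
greet()

# Both calls appended to that one shared list:
print(greet.__defaults__)   # (['hi', 'hi'],)
```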

Several others have pointed out that this is an instance of the "mutable default argument" issue in Python. The basic reason is that the default arguments have to exist "outside" the function in order to be passed into it.
But the real root of this as a problem has nothing to do with default arguments. Any time it would be bad if a mutable default value was modified, you really need to ask yourself: would it be bad if an explicitly provided value was modified? Unless someone is extremely familiar with the guts of your class, the following behaviour would also be very surprising (and therefore lead to bugs):
>>> class One(object):
...     def __init__(self, my_list=[]):
...         self.my_list = my_list
...
>>> alist = ['hello']
>>> one1 = One(alist)
>>> alist.append('world')
>>> one2 = One(alist)
>>>
>>> print(one1.my_list) # Huh? This isn't what I initialised one1 with!
['hello', 'world']
>>> print(one2.my_list) # At least this one's okay...
['hello', 'world']
>>> del alist[0]
>>> print(one2.my_list) # What the hell? I just modified a local variable and a class instance somewhere else got changed?
['world']
Nine times out of ten, if you find yourself reaching for the "pattern" of using None as the default value and writing if value is None: value = default, you shouldn't be. You should simply not be modifying your arguments! Arguments should not be treated as owned by the called code unless it is explicitly documented as taking ownership of them.
In this case (especially because you're initialising a class instance, so the mutable variable is going to live a long time and be used by other methods and potentially other code that retrieves it from the instance) I would do the following:
class One(object):
    def __init__(self, my_list=[]):
        self.my_list = list(my_list)
Now you're initialising the data of your class from a list provided as input, rather than taking ownership of a pre-existing list. There's no danger that two separate instances end up sharing the same list, nor that the list is shared with a variable in the caller which the caller may want to continue using. It also has the nice effect that your callers can provide tuples, generators, strings, sets, dictionaries, home-brewed custom iterable classes, etc, and you know you can still count on self.my_list having an append method, because you made it yourself.
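A quick sketch of that behaviour, using a hypothetical shared list to show that neither the caller nor another instance is affected:

```python
class One(object):
    def __init__(self, my_list=[]):
        # list() builds a fresh list every time, so nothing is shared.
        self.my_list = list(my_list)

shared = ['hello']
a = One(shared)
b = One(shared)

shared.append('world')   # the caller's list changes...
a.my_list.append('!')    # ...and one instance's list changes...

print(a.my_list)  # ['hello', '!']
print(b.my_list)  # ['hello'] -- but neither affects the other

# Any iterable works as input, not just lists:
c = One('ab')
print(c.my_list)  # ['a', 'b']
```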
There's still a potential problem here: if the elements contained in the list are themselves mutable, then the caller and this instance can still accidentally interfere with each other. In practice I find this rarely to be a problem in my code (so I don't automatically take a deep copy of everything), but you have to be aware of it.
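A minimal sketch of that caveat: a shallow copy shares the element objects, while copy.deepcopy (from the standard library) copies them as well:

```python
import copy

class One(object):
    def __init__(self, my_list=[]):
        self.my_list = list(my_list)   # shallow: the elements are shared

inner = {'n': 1}
one = One([inner])

inner['n'] = 2
print(one.my_list[0])   # {'n': 2} -- the shared element shows through

# copy.deepcopy also copies the elements themselves:
deep = copy.deepcopy([inner])
inner['n'] = 3
print(deep[0])          # {'n': 2} -- independent of later changes
```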
Another issue is that if my_list can be very large, the copy can be expensive. There you have to make a trade-off. In that case, maybe it is better to just use the passed-in list after all, and use the if my_list is None: my_list = [] pattern to prevent all default instances sharing the one list. But if you do that you need to make it clear, either in documentation or the name of the class, that callers are relinquishing ownership of the lists they use to initialise the instance. Or, if you really want to be constructing a list solely for the purpose of wrapping up in an instance of One, maybe you should figure out how to encapsulate the creation of the list inside the initialisation of One, rather than constructing it first; after all, it's really part of the instance, not an initialising value. Sometimes this isn't flexible enough though.
And sometimes you really honestly do want to have aliasing going on, and have code communicating by mutating values they both have access to. I think very hard before I commit to such a design, however. And it will surprise others (and you when you come back to the code in X months), so again documentation is your friend!
In my opinion, educating new Python programmers about the "mutable default argument" gotcha is actually (slightly) harmful. We should be asking them "Why are you modifying your arguments?" (and then pointing out the way default arguments work in Python). The very fact of a function having a sensible default argument is often a good indicator that it isn't intended as something that receives ownership of a pre-existing value, so it probably shouldn't be modifying the argument whether or not it got the default value.

Basically, Python function objects store a tuple of default arguments. That is fine for immutable things like integers, but lists and other mutable objects are often modified in place, resulting in the behavior you observed.

This is standard behavior of default arguments anywhere in Python, not just in classes.
For more explanation, see Mutable defaults for function/method arguments.

Python functions are objects. Default arguments of a function are attributes of that function. So if the default value of an argument is mutable and it's modified inside your function, the changes are reflected in subsequent calls to that function.

Not an answer, but it's worth noting this is also true for class variables defined outside any class functions.
Example:
>>> class one:
...     myList = []
...
>>>
>>> one1 = one()
>>> one1.myList
[]
>>> one2 = one()
>>> one2.myList.append("Hello Thar!")
>>>
>>> one1.myList
['Hello Thar!']
>>>
Note that not only does the value of myList persist, but myList on every instance points to the same list.
I ran into this bug/feature myself, and spent something like 3 hours trying to figure out what was going on. It's rather challenging to debug when you are getting valid data, but it comes from previous computations rather than the local ones.
It's made worse since this is not just a default argument. You can't just put a bare myList in the class definition; it has to be set equal to something, and whatever it is set equal to is only evaluated once.
The solution, at least for me, was to simply create all such variables inside __init__, as instance attributes.

Related

How do you know in advance if a method (or function) will alter the variable when called?

I am new to Python from R. I have recently spent a lot of time reading up on how everything in Python is an object, objects can call methods on themselves, methods are functions within a class, yada yada yada.
Here's what I don't understand. Take the following simple code:
mylist = [3, 1, 7]
If I want to know how many times the number 7 occurs, I can do:
mylist.count(7)
That, of course, returns 1. And if I want to save the count number to another variable:
seven_counts = mylist.count(7)
So far, so good. Other than the syntax, the behavior is similar to R. However, let's say I am thinking about adding a number to my list:
mylist.append(9)
Wait a minute, that method actually changed the variable itself! (i.e., "mylist" has been altered and now includes the number 9 as the fourth item in the list.) Assigning the code to a new variable (like I did with seven_counts) produces garbage:
newlist = mylist.append(9)
I find the inconsistency in this behavior a bit odd, and frankly undesirable. (Let's say I wanted to see what the result of the append looked like first and then have the option to decide whether or not I want to assign it to a new variable.)
My question is simple:
Is there a way to know in advance if calling a particular method will actually alter your variable (object)?
Aside from reading the documentation (which for some methods will include type annotations specifying the return value) or playing with the method in the interactive interpreter (including using help() to check the docstring for a type annotation), no, you can't know up front just by looking at the method.
That said, the behavior you're seeing is intentional. Python methods either return a new modified copy of the object or modify the object in place; at least among built-ins, they never do both (some methods mutate the object and return a non-None value, but it's never the object just mutated; the pop method of dict and list is an example of this case).
This either/or behavior is intentional; if they didn't obey this rule, you'd have had an even more confusing and hard to identify problem, namely, determining whether append mutated the value it was called on, or returned a new object. You definitely got back a list, but is it a new list or the same list? If it mutated the value it was called on, then
newlist = mylist.append(9)
is a little strange; newlist and mylist would be aliases to the same list (so why have both names?). You might not even notice for a while; you'd continue using newlist, thinking it was independent of mylist, only to look at mylist and discover it was all messed up. By having all such "modify in place" methods return None (or at least, not the original object), the error is discovered more quickly/easily; if you try and use newlist, mistakenly believing it to be a list, you'll immediately get TypeErrors or AttributeErrors.
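That fail-fast behaviour is easy to demonstrate: the None that append returns blows up the moment you treat it as a list:

```python
mylist = [1, 2]
newlist = mylist.append(9)   # append mutates mylist and returns None

print(mylist)   # [1, 2, 9]
print(newlist)  # None

try:
    newlist.append(10)       # treating the result as a list fails immediately
except AttributeError as exc:
    print(exc)               # 'NoneType' object has no attribute 'append'
```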
Basically, the only way to know in advance is to read the documentation. For methods whose name indicates a modifying operation, you can check the return value and often get an idea as to whether they're mutating. It helps to know what types are mutable in the first place; list, dict, set and bytearray are all mutable, and the methods they have that their immutable counterparts (aside from dict, which has no immutable counterpart) lack tend to mutate the object in place.
The default tends to be to mutate the object in place simply because that's more efficient; if you have a 100,000 element list, a default behavior for append that made a new 100,001 element list and returned it would be extremely inefficient (and there would be no obvious way to avoid it). For immutable types (e.g. str, tuple, frozenset) this is unavoidable, and you can use those types if you want a guarantee that the object is never mutated in place, but it comes at the cost of unnecessary creation and destruction of objects that will slow down your code in most cases.
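The two conventions can be seen side by side with the built-in list: list.sort mutates in place and returns None, while the sorted built-in returns a new list and leaves the original alone:

```python
nums = [3, 1, 2]
result = nums.sort()    # in place: efficient, returns None
print(result, nums)     # None [1, 2, 3]

nums = [3, 1, 2]
result = sorted(nums)   # new list: the original is untouched
print(result, nums)     # [1, 2, 3] [3, 1, 2]
```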
Just checkout the doc:
>>> list.count.__doc__
'L.count(value) -> integer -- return number of occurrences of value'
>>> list.append.__doc__
'L.append(object) -> None -- append object to end'
There isn't really an easy way to tell, but:
immutable object --> no way of changing through method calls
So, for example, tuple has no methods which affect the tuple as it is unchangeable so methods can only return new instances.
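For instance, str and tuple are immutable, so every "modifying" operation hands back a new object and leaves the original untouched:

```python
s = "hello"
t = s.upper()        # str is immutable: upper() returns a new string
print(t, s)          # HELLO hello -- the original is unchanged

tup = (1, 2)
tup2 = tup + (3,)    # tuples can only be combined into new tuples
print(tup2, tup)     # (1, 2, 3) (1, 2)
```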
And if you "wanted to see what the result of the append looked like first and then have the option to decide whether or not I want to assign it to a new variable" then you can concatenate the list with a new list with one element.
i.e.
>>> l = [1,2,3]
>>> k = l + [4]
>>> l
[1, 2, 3]
>>> k
[1, 2, 3, 4]
Not from merely your invocation (your method call). You can guarantee that the method won't change the object if you pass in only immutable objects, but some methods are defined to change the object -- and will either not be defined for the one you use, or will fault in execution.
In Real Life, you look at the method's documentation: that will tell you exactly what happens.
[I was about to include what Joe Iddon's answer covers ...]

Why does PyCharm warn about mutable default arguments? How can I work around them?

I am using PyCharm (Python 3) to write a Python function which accepts a dictionary as an argument with attachment={}.
def put_object(self, parent_object, connection_name, **data):
    ...

def put_wall_post(self, message, attachment={}, profile_id="me"):
    return self.put_object(profile_id, "feed", message=message, **attachment)
In the IDE, attachment={} is colored yellow. Moving the mouse over it shows a warning.
Default arguments value is mutable
This inspection detects when a mutable value as list or dictionary is
detected in a default value for an argument.
Default argument values are evaluated only once at function definition
time, which means that modifying the default value of the argument
will affect all subsequent calls of the function.
What does this mean and how can I resolve it?
If you don't alter the "mutable default argument", or pass it anywhere it could be altered, just ignore the message, because there is nothing to be "fixed".
In your case you only unpack the "mutable default argument" (which does an implicit copy), so you're safe.
If you want to "remove that warning message" you could use None as default and set it to {} when it's None:
def put_wall_post(self, message, attachment=None, profile_id="me"):
    if attachment is None:
        attachment = {}
    return self.put_object(profile_id, "feed", message=message, **attachment)
Just to explain the "what it means": some types in Python are immutable (int, str, ...), others are mutable (like dict, set, list, ...). If you want to change an immutable object, another object is created; but if you change a mutable object, the object remains the same and only its contents change.
The tricky part is that class variables and default arguments are created when the function is defined (and only once), which means that any changes to a "mutable default argument" or "mutable class variable" are permanent:
def func(key, value, a={}):
    a[key] = value
    return a
>>> print(func('a', 10)) # that's expected
{'a': 10}
>>> print(func('b', 20)) # that could be unexpected
{'b': 20, 'a': 10}
PyCharm probably shows this Warning because it's easy to get it wrong by accident (see for example Why do mutable default arguments remember mutations between function calls? and all linked questions). However, if you did it on purpose (Good uses for mutable function argument default values?) the Warning could be annoying.
You can replace mutable default arguments with None. Then check inside the function and assign the default:
def put_wall_post(self, message, attachment=None, profile_id="me"):
    attachment = attachment if attachment else {}
    return self.put_object(profile_id, "feed", message=message, **attachment)
This works because None evaluates to False so we then assign an empty dictionary.
In general you may want to explicitly check for None as other values could also evaluate to False, e.g. 0, '', set(), [], etc, are all False-y. If your default isn't 0 and is 5 for example, then you wouldn't want to stomp on 0 being passed as a valid parameter:
def function(param=None):
    param = 5 if param is None else param
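A small demonstration of the difference (the function names here are made up for illustration): the truthiness check silently discards a caller's 0, while the explicit None check keeps it:

```python
def stomps_zero(param=None):
    # truthiness check: 0, '', [] etc. all trigger the default
    return 5 if not param else param

def keeps_zero(param=None):
    # explicit None check: only a genuinely missing value gets the default
    return 5 if param is None else param

print(stomps_zero(0))  # 5 -- the caller's 0 was silently discarded
print(keeps_zero(0))   # 0
print(keeps_zero())    # 5
```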
This is a warning from the interpreter that because your default argument is mutable, you might end up changing the default if you modify it in-place, which could lead to unexpected results in some cases. The default argument is really just a reference to the object you indicate, so much like when you alias a list to two different identifiers, e.g.,
>>> a={}
>>> b=a
>>> b['foo']='bar'
>>> a
{'foo': 'bar'}
if the object is changed through any reference, whether during that call to the function, a separate call, or even outside the function, it will affect future calls to the function. If you're not expecting the behavior of the function to change at runtime, this could be a cause for bugs. Every time the function is called, it's the same name being bound to the same object. (In fact, I'm not sure it even goes through the whole name binding process each time; I think it just gets another reference.)
The (likely unwanted) behavior
You can see the effect of this by declaring the following and calling it a few times:
>>> def mutable_default_arg(something={'foo': 1}):
...     something['foo'] += 1
...     print(something)
...
>>> mutable_default_arg()
{'foo': 2}
>>> mutable_default_arg()
{'foo': 3}
Wait, what? Yes: because the object referenced by the argument doesn't change between calls, changing one of its elements changes the default. If you use an immutable type, you don't have to worry about this, because it shouldn't be possible, under standard circumstances, to change an immutable's data. I don't know if this holds for user-defined classes, but that is why this is usually just addressed with None (that, and you only need it as a placeholder, nothing more; why spend the extra RAM on something more complicated?).
Duct-taped problems...
In your case, you were saved by an implicit copy, as another answer pointed out, but it's never a good idea to rely on implicit behavior, especially unexpected implicit behavior, since it could change. That's why we say "explicit is better than implicit". Besides which, implicit behavior tends to hide what's going on, which could lead you or another programmer to removing the duct tape.
...with simple (permanent) solutions
You can avoid this bug magnet completely and satisfy the warning by, as others have suggested, using an immutable type such as None, checking for it at the start of the function, and if found, immediately replacing it before your function gets going:
def put_wall_post(self, message, attachment=None, profile_id="me"):
    if attachment is None:
        attachment = {}
    return self.put_object(profile_id, "feed", message=message, **attachment)
Since immutable types force you to replace them (Technically, you are binding a new object to the same name. In the above, the reference to None is overwritten when attachment is rebound to the new empty dictionary) instead of updating them, you know attachment will always start as None unless specified in the call parameters, thus avoiding the risk of unexpected changes to the default.
(As an aside, when in doubt whether an object is the same as another object, compare them with is or check id(object). The former can check whether two references refer to the same object, and the latter can be useful for debugging by printing a unique identifier—typically the memory location—for the object.)
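A short sketch of the difference between equality and identity:

```python
a = []
b = a      # alias: b is the same object as a
c = []     # a distinct object that happens to be equal

print(a == c)          # True  -- equal contents
print(a is c)          # False -- different objects
print(a is b)          # True  -- one object, two names
print(id(a) == id(b))  # True  -- same identity
```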
To rephrase the warning: every call to this function, if it uses the default, will use the same object. So long as you never change that object, the fact that it is mutable won't matter. But if you do change it, then subsequent calls will start with the modified value, which is probably not what you want.
One solution to avoid this issue would be to have the default be an immutable value like None, and set the parameter to {} if that default is used:
def put_wall_post(self, message, attachment=None, profile_id="me"):
    if attachment is None:
        attachment = {}
    return self.put_object(profile_id, "feed", message=message, **attachment)
Lists are mutable, and a default declared in the def line binds one mutable list, at one address, when the function is defined:
def abc(a=[]):
    a.append(2)
    print(a)

abc()      # prints [2]
abc()      # prints [2, 2] -- the same default list was appended to again
abc([4])   # prints [4, 2] -- a new list at a new address was passed in
abc()      # prints [2, 2, 2] -- back to the one shared default list
 
To correct this:
def abc(a=None):
    if a is None:
        a = []
    a.append(2)
    print(a)

This works because a new list is created on every call (whenever no argument is passed), rather than referencing the single list created at function definition.

Method to replace assignation into functions

I know it's not possible to assign a new value to a variable passed as a parameter to a function.
>>> a = 1
>>> def foo(bar):
...     bar = 2
...
>>> foo(a)
>>> a
1
But it is possible to modify it with methods.
>>> a = [1, 2]
>>> def foo(bar):
...     bar.append(3)
...
>>> foo(a)
>>> a
[1, 2, 3]
But is there a method to replace assignation (giving the variable a whole new value)? Something to make my first example work:
>>> a = 1
>>> def foo(bar):
...     bar.assign(2)
...
>>> foo(a)
>>> a
2
The only alternatives I found are global variables and designing my own classes.
So my questions are:
Is there such a method (or an alternative)?
If there isn't, it must be more a design choice than an oversight. Why this choice? If there are methods to modify part of the value/content, why not a method to replace/give a whole new value/content?
Everything in Python is an object, and objects in Python are mutable, except when they are not. Basic types like strings and numbers (int, float) are not mutable, i.e. 1 is always 1, and you can never make 1 something else.
This is actually the case with most so-called object-oriented languages, and is more of an optimization.
As you said, you would need to create your own Object that wraps the immutable types in order to mutate the internals of your new object. You can even create a very simple class to do this
class Mutatable:
    def __init__(self, value):
        self.value = value

    def assign(self, value):
        # in python you cannot overload the = (assignment) operator
        self.value = value
Now from your example you can say
>>> a = Mutatable(1)
>>> def foo(bar):
...     bar.assign(2)
...
>>> foo(a)
>>> a.value
2
As some of the other posters mentioned, a piece of general programming advice: overuse of mutation makes for very hard-to-debug applications. Functions that return values and raise exceptions are much easier to test and debug.
First off, everything in Python is an object (except some singletons and interned values), and every object exists in one of the local, enclosing, global, or built-in scopes. The main purpose of scopes is to keep together a set of commands that serve a specific aim.
Therefore it's not reasonable to destroy this architecture by letting variables in different scopes impact each other.
But Python has provided us with some tools to make some variables available in an upper scope1, like the global statement.
Note: regarding the second example, remember that changing a mutable object inside the function may impact the caller. That's because mutable objects like lists are actually containers of pointers to the underlying objects, and when you change one pointer it affects the main object.
1. The hierarchy of scopes in Python, from inner to outer, is: Local, Enclosing, Global, Built-in, known as the LEGB rule.
It is a consequence of two design decisions. First, function arguments in Python are passed by assignment. That is, if you have a call like
foo(a)
it translates to, roughly
bar = a
<function body, verbatim>
That's why you can't just pass a pointer to any variable as you would in, say, C. If you want to mutate a function argument, it must have some mutable internal structure. That's where the second design decision comes in: integers in Python are immutable, along with other types including other kinds of numbers, strings or tuples. I don't know the original motivation behind this decision, but I see two main advantages:
easier for a human to reason about the code (you can be sure your integer does not magically change when you pass it to a function)
easier for a computer to reason about the code (for the purpose of optimization or static analysis), because a lot of function arguments are just numbers or strings
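Both design decisions can be seen in a minimal sketch: rebinding a parameter name inside a function never affects the caller, while mutating a shared mutable object does:

```python
def rebind(x):
    x = 99          # rebinds the local name only; the caller never sees this

def mutate(x):
    x.append(99)    # mutates the object both names refer to

n = 1
rebind(n)
print(n)    # 1 -- the int is immutable, and the rebinding was local anyway

lst = [1]
mutate(lst)
print(lst)  # [1, 99] -- the caller sees the mutation
```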
My personal opinion is that you should avoid functions that mutate their arguments where possible. At the very least, they should be methods and mutate the corresponding object. Otherwise the code becomes error-prone and hard to maintain and test. Therefore I fully support this immutability, and try to use immutable objects everywhere.
If you want to change the value of a variable in some function without global or return instructions, then you can pass its name in function parameters and use globals() :
>>> def foo(bar):
...     globals()[bar] = 3
...
>>> x = 7
>>> x
7
>>> foo('x')
>>> x
3

How to make a variable (truly) local to a procedure or function

i.e. we have the global declaration, but no local.
"Normally" arguments are local, I think, or they certainly behave that way.
However if an argument is, say, a list and a method is applied which modifies the list, some surprising (to me) results can ensue.
I have 2 questions: what is the proper way to ensure that a variable is truly local?
I wound up using the following, which works, but it can hardly be the proper way of doing it:
def AexclB(a, b):
    z = a + []  # yuk
    for k in range(0, len(b)):
        try: z.remove(b[k])
        except: continue
    return z
Absent the +[], "a" in the calling scope gets modified, which is not desired.
(The issue here is using a list method that modifies the list in place.)
The supplementary question is, why is there no "local" declaration?
Finally, in trying to pin this down, I made various mickey mouse functions which all behaved as expected except the last one:
def fun4(a):
    z = a
    z = z.append(["!!"])
    return z
a = ["hello"]
print "a=",a
print "fun4(a)=",fun4(a)
print "a=",a
which produced the following on the console:
a= ['hello']
fun4(a)= None
a= ['hello', ['!!']]
...
>>>
The 'None' result was not expected (by me).
Python 2.7 btw in case that matters.
PS: I've tried searching here and elsewhere but not succeeded in finding anything corresponding exactly - there's lots about making variables global, sadly.
It's not that z isn't a local variable in your function. Rather when you have the line z = a, you are making z refer to the same list in memory that a already points to. If you want z to be a copy of a, then you should write z = a[:] or z = list(a).
See this link for some illustrations and a bit more explanation http://henry.precheur.org/python/copy_list
Python will not copy objects unless you explicitly ask it to. Integers and strings are not modifiable, so every operation on them returns a new instance of the type. Lists, dictionaries, and basically all other objects in Python are mutable, so operations like list.append happen in place (and therefore return None).
If you want the variable to be a copy, you must explicitly copy it. In the case of lists, you slice them:
z = a[:]
There is a great answer that covers most of your question here, explaining mutable and immutable types, how they are kept in memory, and how they are referenced. The first section of the answer is for you (before the "How do we get around this?" header).
In the following line
z = z.append(["!!"])
Lists are mutable objects, so when you call append, it updates the referenced object; it does not create a new one and return it. If a method or function does not return anything, it means it returns None.
The link above also gives an immutable example, so you can see the real difference.
You cannot make a mutable object act as if it were immutable. But you can create a new one, instead of passing the reference, when you create a new object from an existing mutable one.
a = [1,2,3]
b = a[:]
For more options you can check here
What you're missing is that all variable assignment in python is by reference (or by pointer, if you like). Passing arguments to a function literally assigns values from the caller to the arguments of the function, by reference. If you dig into the reference, and change something inside it, the caller will see that change.
If you want to ensure that callers will not have their values changed, you can either try to use immutable values more often (tuple, frozenset, str, int, bool, NoneType), or be certain to take copies of your data before mutating it in place.
In summary, scoping isn't involved in your problem here. Mutability is.
Is that clear now?
Still not sure what's the 'correct' way to force the copy; there are various suggestions here.
It differs by data type, but generally <type>(obj) will do the trick. For example list([1, 2]) and dict({1:2}) both return (shallow!) copies of their argument.
If, however, you have a tree of mutable objects and also you don't know a priori which level of the tree you might modify, you need the copy module. That said, I've only needed this a handful of times (in 8 years of full-time Python), and most of those ended up causing bugs. If you need this, it's a code smell, in my opinion.
The complexity of maintaining copies of mutable objects is the reason why there is a growing trend of using immutable objects by default. In the clojure language, all data types are immutable by default and mutability is treated as a special cases to be minimized.
If you need to work on a list or other object in a truly local context you need to explicitly make a copy or a deep copy of it.
from copy import copy
def fn(x):
y = copy(x)

staticmethod parameters in python 2.7 retain value across calls? [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
“Least Astonishment” in Python: The Mutable Default Argument
Using python 2.7 I came across strange behaviour and I'm not sure how to explain it or if it even exists in any python docs. Using the code
class MyClass:
    @staticmethod
    def func(objects=[], a=None, b=None):
        objects.append(a)
        print 'objects: %s' % objects
        print 'b: %s' % b

MyClass.func(a='one')
MyClass.func(a='two', b='foo')
MyClass.func(a='three')
I get the output
objects: ['one']
b: None
objects: ['one', 'two']
b: foo
objects: ['one', 'two', 'three']
b: None
As you can see, the first list parameter (objects) of the method retains its values across calls, with new values being appended to the last list, even though in its header declaration it has a default value of []. But the last parameter (b) does not retain its value; it is reset to the default value between calls.
The expected behaviour (for me anyway) is that the objects parameter should be reset to its default on any call to the method (like the b parameter is), but this doesn't seem to happen; the default only seems to take effect on the first call.
Can anyone explain this behaviour? Is it a bug in this version of Python, or is it intended behaviour? Possibly something to do with the list reference being retained across calls while the string variable (b) is not? I'm very confused by this behaviour.
Thanks
It has nothing to do with being a staticmethod. This is a very common mistake in Python.
Functions in Python are first-class objects, not just blocks of code, so the empty list you assign to objects in the parameters is a real list attached to the function object. Every time you call it and append something to that list, you are using the same list. It's easy to see it happening this way:
>>> def func(x, objects=[]):
...     objects.append(x)
...
>>> func(1)
>>> func.func_defaults
([1],)
>>> func(2)
>>> func.func_defaults
([1, 2],)
>>> func(3)
>>> func.func_defaults
([1, 2, 3],)
func_defaults is the function object attribute that contains the default keyword parameters you set. (In Python 3 it is spelled __defaults__.) See how the list is there and gets changed?
The proper way to do what you want is:
class MyClass:
    @staticmethod
    def func(objects=None, a=None, b=None):
        if objects is None:
            objects = []
        objects.append(a)
        print 'objects: %s' % objects
        print 'b: %s' % b
This behaviour is broadly known and by many treated as a feature, and is not staticmethod-specific. It works for every function. It happens when you assign mutable object as the default value to the argument.
See this answer on StackOverflow: Default Argument Gotchas / Dangers of Mutable Default arguments
If you understand mutability vs. immutability issue, think of the function / method like that:
when it is defined, the default argument values are assigned,
when it is invoked, the body is executed, and if it changes a mutable default argument value in place, the changed value is then used on every subsequent call in which the default has not been overridden by an explicitly provided value.
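As a closing sketch, you can confirm that every defaulted call really does reuse one and the same list (hypothetical function name):

```python
def collect(item, bucket=[]):
    bucket.append(item)
    return bucket

first = collect(1)
second = collect(2)

print(first is second)  # True -- every defaulted call reuses one list
print(second)           # [1, 2]
```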
