from collections import namedtuple

FooT = namedtuple('Foo', 'foo bar')

def Foo(foo=None, bar=None):
    return FooT(foo, bar)

foo = Foo()
foo.foo = 29  # AttributeError: can't set attribute
So my use case is a data structure with optional fields, but one whose fields I should still be able to modify if desired.
A defaultdict should be appropriate for what you want. You construct it with a factory function, which it calls to produce a value whenever a missing key is accessed. Here's a demo:
>>> from collections import defaultdict
>>> d = defaultdict(lambda:None)
>>> d['foo'] = 10
>>> d['bar'] = 5
>>> print(d['baz'])
None
>>> d['baz'] = 15
>>> print(d['baz'])
15
Tuples are, by definition, immutable. Namedtuples follow this pattern as well.
In Python 3 there is types.SimpleNamespace [1] that you can use. If you want a simple read/write data structure, though, you could create a class and put constraints on its members.
[1] - Why Python does not support record type i.e. mutable namedtuple
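A minimal sketch of the SimpleNamespace approach (available since Python 3.3), matching the original Foo use case:

```python
from types import SimpleNamespace

# A mutable, attribute-based record with optional fields
foo = SimpleNamespace(foo=None, bar=None)
foo.foo = 29          # unlike a namedtuple, assignment works
print(foo.foo)        # 29
print(foo.bar)        # None
```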
A namedtuple, like a tuple, is not modifiable. Since the question is about namedtuple, in some cases you may find it OK (or sometimes even preferable) to create a new object with the _replace method. Of course, other references to the original object will be unchanged.
from collections import namedtuple

FooT = namedtuple('Foo', 'foo bar')

def Foo(foo=None, bar=None):
    return FooT(foo, bar)

foo = Foo()
foo = foo._replace(foo=29)
For a slight variation on the above answers, why not extend the advice in the tutorial and have a class that returns None for any undefined attribute? For example:
class Foo(object):
    def __getattr__(self, name):
        return None
This is much the same as a defaultdict, but accessible via direct attributes much like a named tuple.
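A quick demo of that class; note that __getattr__ is only consulted when normal attribute lookup fails, so ordinary assignments still behave as usual:

```python
class Foo(object):
    def __getattr__(self, name):
        # fallback for attributes not found by normal lookup
        return None

foo = Foo()
print(foo.bar)   # None: 'bar' was never set
foo.bar = 29     # ordinary assignment creates a real attribute
print(foo.bar)   # 29
```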
Why not just use a class?
class Foo(object):
    def __init__(self, foo=None, bar=None):
        self.foo = foo
        self.bar = bar

foo = Foo()
foo.foo = 29
I have a class that receives some data. Based on a condition, I need that data to change, but only under that condition. The problem I'm running into is that when I call dict.update(), it updates the original variable too. So when a subsequent request comes in, the original variable is "tainted", so to speak, and carries overridden information it shouldn't have.
Assuming a dictionary like this:
my_attributes = {"test": True}
And some logic like this:
class MyClass(object):
    def __init__(self, attributes):
        if my_condition():
            attributes.update({"test": False})
The end result:
>>> my_attributes
{'test': False}
So, the next time MyClass is used, those root attributes are still overridden.
I've seemingly gotten around this problem by re-defining attributes:
class MyClass(object):
    def __init__(self, attributes):
        if my_condition():
            attributes = {}
            attributes.update(my_attributes)
            attributes.update({"test": False})
This has seemed to get around the problem, but I'm not entirely sure this is a good, or even the right, solution to the issue.
Something like this:
class MyClass(object):
    @staticmethod
    def my_condition():
        return True

    def __init__(self, attributes):
        self.attributes = {**attributes}  # copy, so the caller's dict is untouched
        if MyClass.my_condition():
            self.attributes["test"] = False
my_attributes = {"test": True}
cls_obj = MyClass(my_attributes)
print("my_attributes:", my_attributes, "class.attributes:", cls_obj.attributes)
Output:
my_attributes: {'test': True} class.attributes: {'test': False}
You pass a (mutable) dictionary reference to an object. Now, you have two owners of the reference: the caller of the constructor (the "external world" for the object) and the object itself. These two owners may modify the dictionary. Here is an illustration:
>>> d = {}
>>> def ctor(d): return [d] # just build a list with one element
>>> L = ctor(d)
>>> d[1] = 2
>>> L
[{1: 2}]
>>> L[0][3] = 4
>>> d
{1: 2, 3: 4}
How do you prevent this? Both owners want to protect themselves from wild mutation of their variables. If I were the external world, I would like to pass an immutable reference to the dict, but Python does not provide immutable references for dicts. A copy is the way to go:
>>> d = {}
>>> L = ctor(dict(d)) # I don't give you *my* d
>>> d[1] = 2
>>> L
[{}]
If I were the object, I would make a copy of the dict before using it:
>>> d = {}
>>> def ctor2(d): return [dict(d)] # to be sure L[0] is *mine*!
>>> L = ctor2(dict(d)) # I don't give you *my* d
But now you have made two copies of the object just because everyone is scared to see its variables modified by the other. And the issue is still here if the dictionary contains (mutable) references.
The solution is to spell out the responsibilities of each one:
class MyClass(object):
    """Usage: MyClass(attributes).do_something() where attributes is a mapping.

    The mapping won't be modified.
    """
    ...
Note that this is the commonly expected behavior: unless specified otherwise, the arguments of a function/constructor are not modified. We avoid side effects when possible, but that's not always the case: see list.sort() vs sorted(...).
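To illustrate the list.sort() vs sorted(...) contrast mentioned above:

```python
data = [3, 1, 2]

# sorted() returns a new list and leaves its argument untouched
result = sorted(data)
print(data, result)   # [3, 1, 2] [1, 2, 3]

# list.sort() mutates the list in place and returns None
data.sort()
print(data)           # [1, 2, 3]
```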
Hence I think your solution is good. But I prefer to avoid too much logic in the constructor:
class MyClass(object):
    @staticmethod
    def create_prod(attributes):
        attributes = dict(attributes)
        attributes.update({"test": False})
        return MyClass(attributes)

    @staticmethod
    def create_test(attributes):
        return MyClass(attributes)

    def __init__(self, attributes):
        self._attributes = attributes  # MyClass won't modify attributes
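Putting that together, a runnable sketch (the factory class as above; the caller code is hypothetical) showing that the caller's dict survives:

```python
class MyClass(object):
    @staticmethod
    def create_prod(attributes):
        attributes = dict(attributes)      # copy: the caller's dict is safe
        attributes.update({"test": False})
        return MyClass(attributes)

    @staticmethod
    def create_test(attributes):
        return MyClass(attributes)

    def __init__(self, attributes):
        self._attributes = attributes      # MyClass won't modify attributes

my_attributes = {"test": True}
obj = MyClass.create_prod(my_attributes)
print(my_attributes)      # {'test': True} -- unchanged
print(obj._attributes)    # {'test': False}
```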
I'm not sure my question title is correct, but here is my problem: when I create a new class called classA, deepcopy it to another name called classB, and then do equality and identity tests, the results surprise me.
Here is my first snippet, creating a class using type:
>>> import copy
>>> classA = type('X', (object,), {})
>>> classB = copy.deepcopy(classA)
>>> classB is classA
True
>>> classB == classA
True
>>>
Second snippet, testing a class created with the class keyword:
>>> class X(object): pass
...
>>> import copy
>>> Y = copy.deepcopy(X)
>>> Y is X
True
>>> Y == X
True
>>>
Third snippet, doing the same test on a list object:
>>> import copy
>>> objA = list()
>>> objB = copy.deepcopy(objA)
>>> objB == objA
True
>>> objB is objA
False
>>>
Why do the first two behave differently from the third? Could someone please explain?
It is documented behavior:
This module does not copy types like module, method, stack trace, stack frame, file, socket, window, array, or any similar types. It does “copy” functions and classes (shallow and deeply), by returning the original object unchanged; this is compatible with the way these are treated by the pickle module.
As to why it was done that way, presumably it was because people don't have a lot of need for having multiple identical but distinct classes.
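If you genuinely need a distinct but equivalent class, one workaround (a sketch, not an official API) is to rebuild it with type() instead of copying it:

```python
import copy

class X(object):
    pass

# deepcopy hands back the very same class object
print(copy.deepcopy(X) is X)   # True

# Rebuilding with type() yields a distinct class with the same contents.
# The __dict__/__weakref__ descriptors are skipped: type() recreates them.
ns = {k: v for k, v in X.__dict__.items() if k not in ('__dict__', '__weakref__')}
Y = type(X.__name__, X.__bases__, ns)
print(Y is X)                  # False
print(Y.__name__ == X.__name__)  # True
```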
In Python, if you have a dictionary
d = {'foo': 1, 'bar': False}
You can apply it to a function that accepts foo and bar keyword arguments:
def func(foo, bar):
    # Do something complicated here
    pass

func(**d)
But if instead, I wanted to call func with the namedtuple defined below:
from collections import namedtuple
Record = namedtuple('Record', 'foo bar')
r = Record(foo=1, bar=False)
func(r) # !!! this will not work
What's the syntax for this?
A namedtuple instance has a ._asdict() method:
func(**r._asdict())
but if the namedtuple attributes are in the same order as the arguments of the function, you could just apply it as a sequence instead:
func(*r)
Here the two values of the namedtuple are applied, in order, to the keyword arguments in the function. Those two arguments can be addressed as positional arguments still, after all!
For your sample function, both work:
>>> def func(foo, bar):
...     print(foo, bar)
...
>>> from collections import namedtuple
>>> Record = namedtuple('Record', 'foo bar')
>>> r = Record(foo=1, bar=False)
>>> func(**r._asdict())
1 False
>>> func(*r)
1 False
To create a property in a class you simply do self.property = value. I want to be able to have the properties in this class completely dependent on a parameter. Let us call this class Foo.
instances of the Foo class would take a list of tuples:
l = [("first","foo"),("second","bar"),("anything","you get the point")]
bar = Foo(l)
now the instance of the Foo class we assigned to bar would have the following properties:
bar.first
#foo
bar.second
#bar
bar.anything
#you get the point
Is this even remotely possible? How?
I thought of another approach you could use, with type(). It's completely different from my other answer, so I've posted it separately:
>>> bar = type('Foo', (), dict(l))()
>>> bar.first
'foo'
>>> bar.second
'bar'
>>> bar.anything
'you get the point'
type() returns a class, not an instance, hence the extra () at the end.
These are called attributes, rather than properties. With that in mind, the method setattr() becomes more obvious:
class Foo(object):
    def __init__(self, l):
        for k, v in l:
            setattr(self, k, v)
This takes each key-value pair in l and sets the attribute k on the new instance of Foo (self) to v.
Using your example:
l = [("first","foo"),("second","bar"),("anything","you get the point")]
bar = Foo(l)
print(bar.first)     # foo
print(bar.second)    # bar
print(bar.anything)  # you get the point
There are two ways to do this:
Use setattr like this. This approach is feasible if you only need to process the initial list once, when the object is constructed.
class Foo:
    def __init__(self, l):
        for (a, b) in l:
            setattr(self, a, b)
Define a custom __getattr__ method. Preferably, you would store the properties in a dict for faster lookup, but you can also search the original list. This is better if you want to later modify the list and want this to be reflected in the attributes of the object.
class Foo:
    def __init__(self, l):
        self.l = l

    def __getattr__(self, name):
        for a in self.l:
            if a[0] == name:
                return a[1]
        return None
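The dict-backed variant suggested above might look like this (a sketch; note that building the dict once in the constructor trades away the live view of the list that the list-scanning version keeps):

```python
class Foo:
    def __init__(self, l):
        # build a dict once for O(1) lookups instead of scanning the list
        self._attrs = dict(l)

    def __getattr__(self, name):
        # only called when normal attribute lookup fails
        return self._attrs.get(name)

l = [("first", "foo"), ("second", "bar"), ("anything", "you get the point")]
bar = Foo(l)
print(bar.first)     # foo
print(bar.missing)   # None
```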
Something like this?
>>> class Foo:
...     def __init__(self, mylist):
...         for k, v in mylist:
...             setattr(self, k, v)
...
>>> l = [("first","foo"),("second","bar"),("anything","you get the point")]
>>> bar = Foo(l)
>>> bar.first
'foo'
>>> bar.second
'bar'
>>> bar.anything
'you get the point'
Using setattr you can do this by passing in the list and just iterating through it.
setattr works.
>>> class Foo:
...     def __init__(self, yahoo):
...         for k, v in yahoo:
...             setattr(self, k, v)
...
>>> l = [("first","foo"),("second","bar"),("anything","you get the point")]
>>> bar = Foo(l)
>>> print(bar.first)
foo
>>> print(bar.second)
bar
>>> print(bar.anything)
you get the point
Dictionaries and lists defined directly in the class body act like static (class-level) variables (e.g. this question). How come other values, such as integers, do not?
>>> class Foo():
...     bar = 1
...
>>> a = Foo()
>>> b = Foo()
>>> a.bar = 4
>>> b.bar
1
>>> class Foo():
...     bar = {}
...
>>> a = Foo()
>>> b = Foo()
>>> a.bar[7] = 8
>>> b.bar
{7: 8}
They are all class variables; the exception is that assigning a.bar = 4 created an instance variable. Basically, Python has a lookup order for attributes. It goes:
instance -> class -> parent classes in MRO order (left to right)
So if you have
class Foo(object):
    bar = 1
This is a variable on the class Foo. Once you do
a = Foo()
a.bar = 2
you have created a new variable on the object a with the name bar. If you look at a.__class__.bar you will still see 1, but it is effectively hidden due to the order mentioned earlier.
The dict you created is at the class-level so it is shared between all instances of that class.
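To make the shadowing concrete, a short sketch:

```python
class Foo(object):
    bar = 1

a = Foo()
a.bar = 2                   # creates an instance attribute shadowing the class one
print(a.bar)                # 2: instance wins in the lookup order
print(a.__class__.bar)      # 1: the class variable is hidden, not replaced
print(Foo().bar)            # 1: other instances still see the class variable
```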
When you make the assignment
>>> a.bar = 4
you are binding the name bar on the instance a to 4, a new integer object; the class attribute Foo.bar is untouched, just shadowed. On the other hand,
>>> a.bar[7] = 8
does not bind bar anywhere new, and simply modifies the dictionary that the class-level name refers to, so the change is visible through every instance. If you do
>>> a.bar = {7: 8}
then you will bind bar on the instance a to a new dictionary, again shadowing Foo.bar.
Assuming a is an instance of A and A has a class attribute bar:
a.bar = 4 creates an instance attribute bar which hides the class bar in the context of a
a.bar[4] = 2 only modifies the object that the class-level bar binds to (assuming it supports indexing)
a.bar += 1 - This one is nasty. If the class-level bar supports in-place addition (e.g. by implementing __iadd__(), as lists do), then the shared object is modified in place; note that an instance attribute bar is still created, because augmented assignment always ends with a store to a.bar (it just binds to the same, now-modified object). Otherwise it is equivalent to a.bar = a.bar + 1 and a new instance attribute bar is created referring to a new object.
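The += subtlety can be seen with a mutable class attribute (a quick sketch; both the shared object and a new instance binding are affected):

```python
class A(object):
    bar = [1]       # mutable class attribute

class B(object):
    bar = 1         # immutable class attribute

a = A()
a.bar += [2]        # list.__iadd__ mutates the shared list in place...
print(A.bar)                 # [1, 2]: the class-level list changed
print('bar' in a.__dict__)   # True: ...and an instance attribute was still bound
print(a.bar is A.bar)        # True: both names refer to the same list

b = B()
b.bar += 1          # ints have no __iadd__: same as b.bar = b.bar + 1
print(B.bar)        # 1: class attribute untouched
print(b.bar)        # 2: new instance attribute
```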