Can I instantiate a subclass object from the superclass - python

I have the following example code:
class A(object):
    def __init__(self, id):
        self.myid = id

    def foo(self, x):
        print 'foo', self.myid*x

class B(A):
    def __init__(self, id):
        self.myid = id
        self.mybid = id*2

    def bar(self, x):
        print 'bar', self.myid, self.mybid, x
When used, it generates output like the following:
>>> a = A(2)
>>> a.foo(10)
foo 20
>>>
>>> b = B(3)
>>> b.foo(10)
foo 30
>>> b.bar(12)
bar 3 6 12
Now let's say I have some more subclasses, class C(A): and class D(A):. I also know that any given id will always fit exactly one of B, C or D, never two of them at the same time.
Now I would like to call A(23) and get an object of the correct subclass. Something like this:
>>> type(A(2))
<class '__main__.B'>
>>> type(A(22))
<class '__main__.D'>
>>> type(A(31))
<class '__main__.C'>
>>> type(A(12))
<class '__main__.B'>
Is this impossible or is it possible but just bad design? How should problems like this be solved?

You should instead implement the Abstract Factory pattern; your factory can then build whichever object you like, depending on the parameters provided. That way your code stays clean and extensible.
Any hack that makes the superclass do this directly may stop working when you upgrade your interpreter version, since nobody expects backwards compatibility to preserve such things.
EDIT: After a while I'm not sure whether you should use the Abstract Factory or the Factory Method pattern. It depends on the details of your code, so suit your needs.
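For concreteness, here is a minimal sketch of the Factory Method idea, kept close to your example. The classmethod name from_id and its even/odd dispatch rule are invented for illustration only; substitute your real mapping from id to subclass:

class A(object):
    def __init__(self, id):
        self.myid = id

    @classmethod
    def from_id(cls, id):
        # Toy dispatch rule: replace with the real test that decides
        # which of B, C, D a given id belongs to.
        if id % 3 == 0:
            return D(id)
        elif id % 2 == 0:
            return B(id)
        else:
            return C(id)

class B(A):
    pass

class C(A):
    pass

class D(A):
    pass

obj = A.from_id(9)
print(type(obj))    # <class '__main__.D'> under the toy rule above

Callers then write A.from_id(23) instead of A(23), and the dispatch logic lives in one place.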

Generally it's not such a good idea when a superclass has any knowledge of the subclasses.
Think about what you want to do from an OO point of view.
The superclass is providing common behaviour for all objects of that type, e.g. Animal. Then the subclass provides the specialisation of the behaviour, e.g. Dog.
Think of it in terms of an "is-a" relationship, i.e. a Dog is an Animal.
"An Animal is a Dog" doesn't really make sense.
HTH
cheers,
Rob

I don't think you can change the type of the object, but you can create another class that will work like a factory for the subclasses. Something like this:
class LetterFactory(object):
    @staticmethod
    def getLetterObject(n):
        if n == 1:
            return A(n)
        elif n == 2:
            return B(n)
        else:
            return C(n)
a = LetterFactory.getLetterObject(1)
b = LetterFactory.getLetterObject(2)
...


Is it possible to make the output of `type` return a different class?

So disclaimer: this question has piqued my curiosity a bit, and I'm asking this for purely educational purposes. More of a challenge for the Python gurus here I suppose!
Is it possible to make the output of type(foo) return a different value than the actual instance class? i.e. can it pose as an imposter and pass a check such as type(Foo()) is Bar?
@juanpa.arrivillaga suggested manually re-assigning __class__ on the instance, but that has the side effect of changing how all other methods are resolved, e.g.
class Foo:
    def test(self):
        return 1

class Bar:
    def test(self):
        return 2

foo = Foo()
foo.__class__ = Bar
print(type(foo) is Bar)
print(foo.test())
>>> True
>>> 2
The desired output would be True, 1, i.e. the class returned by type differs from the instance's real class, yet the instance methods defined in the real class still get invoked.
No - the __class__ attribute is a fundamental piece of information about the layout of every Python object, as "seen" at the C API level itself. And that is what is checked by the call to type.
That means: every Python object has a slot in its in-memory layout with space for a single pointer to the Python object that is that object's class.
Even if you use ctypes or other means to override protection on that slot and change it from Python code (modifying obj.__class__ with = is guarded at the C level), changing it effectively changes the object's type: the value in the __class__ slot IS the object's class, and the test method would then be picked from the class stored there (Bar in your example).
However, there is more to it: in all documentation, type(obj) is treated as equivalent to obj.__class__ - but if the object's class defines a descriptor named __class__, that descriptor is used when one writes obj.__class__. type(obj), on the other hand, checks the instance's __class__ slot directly and returns the true class.
So, this can "lie" to code using obj.__class__, but not type(obj):
class Bar:
    def test(self):
        return 2

class Foo:
    def test(self):
        return 1

    @property
    def __class__(self):
        return Bar
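A quick sketch of what this buys you (the commented outputs reflect the behaviour described above):

foo = Foo()
print(foo.__class__)          # <class '__main__.Bar'> - the property lies
print(type(foo))              # <class '__main__.Foo'> - the slot tells the truth
print(isinstance(foo, Bar))   # True: isinstance also consults __class__
print(foo.test())             # 1: methods are still taken from Foo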
Property on the metaclass
Trying to create a __class__ descriptor on the metaclass of Foo itself gets messy -- both type(Foo()) and repr(Foo()) will appear to report an instance of Bar, but the "real" object class is still Foo. In a sense, yes, it makes type(Foo()) lie, but not in the way you were thinking: type(Foo()) will output the repr of Bar(), yet it is Foo's repr that is messed up, due to implementation details inside type.__call__:
In [73]: class M(type):
    ...:     @property
    ...:     def __class__(cls):
    ...:         return Bar
    ...:

In [74]: class Foo(metaclass=M):
    ...:     def test(self):
    ...:         return 1
    ...:
In [75]: type(Foo())
Out[75]: <__main__.Bar at 0x55665b000578>
In [76]: type(Foo()) is Bar
Out[76]: False
In [77]: type(Foo()) is Foo
Out[77]: True
In [78]: Foo
Out[78]: <__main__.Bar at 0x55665b000578>
In [79]: Foo().test()
Out[79]: 1
In [80]: Bar().test()
Out[80]: 2
In [81]: type(Foo())().test()
Out[81]: 1
Modifying type itself
Since no one "imports" type from anywhere, and just use
the built-in type itself, it is possible to monkeypatch the builtin
type callable to report a false class - and it will work for all
Python code in the same process relying on the call to type:
original_type = __builtins__["type"] if isinstance(__builtins__, dict) else __builtins__.type

def type(obj_or_name, bases=None, attrs=None, **kwargs):
    if bases is not None:
        return original_type(obj_or_name, bases, attrs, **kwargs)
    if hasattr(obj_or_name, "__fakeclass__"):
        return getattr(obj_or_name, "__fakeclass__")
    return original_type(obj_or_name)

if isinstance(__builtins__, dict):
    __builtins__["type"] = type
else:
    __builtins__.type = type

del type
There is one trick here I had not found in the docs: when accessing __builtins__ in a program, it behaves as a dictionary. However, in an interactive environment such as Python's REPL or IPython, it is a module - retrieving the original type and writing the modified version back to __builtins__ has to take that into account - the code above works both ways.
And testing this (I imported the snippet above from a .py file on disk):
>>> class Bar:
...     def test(self):
...         return 2
...
>>> class Foo:
...     def test(self):
...         return 1
...     __fakeclass__ = Bar
...
>>> type(Foo())
<class '__main__.Bar'>
>>>
>>> Foo().__class__
<class '__main__.Foo'>
>>> Foo().test()
1
Although this works for demonstration purposes, replacing the built-in type caused "dissonances" that proved fatal in a more complex environment: IPython crashes and terminates immediately if the snippet above is run.

How to make a copy of a class in Python?

I have a class A
class A(object):
    a = 1
    def __init__(self):
        self.b = 10
    def foo(self):
        print type(self).a
        print self.b
Then I want to create a class B, which is equivalent to A but has a different name and a different value for the class member a.
This is what I have tried:
class A(object):
    a = 1
    def __init__(self):
        self.b = 10
    def foo(self):
        print type(self).a
        print self.b

A_dummy = type('A_dummy', (object,), {})
A_attrs = {attr: getattr(A, attr) for attr in dir(A) if attr not in dir(A_dummy)}
B = type('B', (object,), A_attrs)
B.a = 2

a = A()
a.foo()
b = B()
b.foo()
However I got an error:
File "test.py", line 31, in main
b.foo()
TypeError: unbound method foo() must be called with A instance as first argument (got nothing instead)
So how can I cope with this sort of job (creating a copy of an existing class)? Maybe a metaclass is needed? What I would prefer is just a function FooCopyClass, such that:
B = FooCopyClass('B',A)
A.a = 10
B.a = 100
print A.a # get 10 as output
print B.a # get 100 as output
In this case, modifying a class member of B won't influence A, and vice versa.
The problem you're encountering is that looking up a method attribute on a Python 2 class creates an unbound method, it doesn't return the underlying raw function (on Python 3, unbound methods are abolished, and what you're attempting would work just fine). You need to bypass the descriptor protocol machinery that converts from function to unbound method. The easiest way is to use vars to grab the class's attribute dictionary directly:
# Make copy of A's attributes
Bvars = vars(A).copy()
# Modify the desired attribute
Bvars['a'] = 2
# Construct the new class from it
B = type('B', (object,), Bvars)
Equivalently, you could copy and initialize B in one step, then reassign B.a after:
# Still need to copy; we can't initialize directly from the proxy type that
# vars(SOMECLASS) returns to protect the class internals
B = type('B', (object,), vars(A).copy())
B.a = 2
Or for slightly non-idiomatic one-liner fun:
B = type('B', (object,), dict(vars(A), a=2))
Either way, when you're done:
B().foo()
will output:
2
10
as expected.
You may be trying to do one of two things. (1) Create copies of classes for some real app: in that case, try using copy.deepcopy - it includes the mechanisms for copying classes. Just change the copy's __name__ attribute afterwards if needed. It works in both Python 2 and Python 3.
(2) Learn and understand Python's internal class organization: in that case, there is no reason to fight with Python 2, as some wrinkles there were fixed in Python 3.
In any case, if you use dir to fetch a class's attributes, you will end up with more than you want - dir also retrieves the methods and attributes of all superclasses. So even if your approach were made to work (in Python 2 that means taking the .im_func attribute of each retrieved unbound method, to use it as a raw function when creating the new class), your class would have more methods than the original one.
Actually, in both Python 2 and Python 3, copying the class's __dict__ will suffice. If you also want mutable objects that are class attributes not to be shared, resort again to deepcopy. In Python 3:
class A(object):
    b = []
    def foo(self):
        print(self.b)

from copy import deepcopy

def copy_class(cls, new_name):
    # Deep-copy the class dict entries so mutable class attributes are not
    # shared; skip the __dict__/__weakref__ slots, which type() recreates.
    attrs = {key: deepcopy(value) for key, value in cls.__dict__.items()
             if key not in ('__dict__', '__weakref__')}
    new_cls = type(new_name, cls.__bases__, attrs)
    new_cls.__name__ = new_name
    return new_cls
In Python 2, it would work almost the same, but there is no convenient way to get the explicit bases of an existing class (i.e. __bases__ is not set). You can use __mro__ for the same effect. The only thing is that all ancestor classes are passed in a hardcoded order as bases of the new class, and in a complex hierarchy you could have differences between the behaviors of B descendants and A descendants if multiple-inheritance is used.
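A rough usage sketch of the copy_class helper above (the commented outputs show what the deep copy is meant to guarantee):

B = copy_class(A, 'B')

a = A()
b = B()
b.b.append(1)        # mutate B's class-level list

a.foo()              # prints [] - A's list is not shared
b.foo()              # prints [1]
print(B.__name__)    # B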

Implementing sub-objects of the same class

My question is pretty general in principle. I have a class called Menu that has a list of items, and each of those items can be either a string or another instance of Menu. My code for that looks like this:
class Menu():
    def __init__(self):
        self.items = []

    def add_item(self, item):
        self.items.append(item)

    def add_menu(self):
        self.add_item(Menu())
As you can see, I've used the actual name of the class, Menu, within one of its methods. My question is whether it's possible to do that without writing the actual name of the class, but rather by referring to the class it's defined in. For example, I've tried
self.add_item(super(self))
which gives TypeError: super() argument 1 must be type, not Menu, and I also tried
self.add_item(super())
That runs without error, but the object it inserts is <super: <class 'Menu'>, <Menu object>>
I'm beginning to suspect I'm using the wrong tool for the job. What am I doing wrong, and is the kind of reference I need even possible?
If it's relevant, my Python version is 3.5.3.
Sure it is possible:
>>> class A:
...     def create_instance(self):
...         return type(self)()
...
>>> a1 = A()
>>> a2 = a1.create_instance()
>>> a1
<__main__.A object at 0x1029c27f0>
>>> a2
<__main__.A object at 0x1029c28d0>
Note, of course, this is because:
>>> type(a1)
<class '__main__.A'>
>>> A
<class '__main__.A'>
>>> type(a1) is A
True
Alternatively, this may also be a use-case for classmethod:
>>> class A:
...     @classmethod
...     def make_instance(cls):
...         return cls()
...
>>> a1 = A()
>>> a2 = a1.make_instance()
>>> a1
<__main__.A object at 0x1029c29b0>
>>> a2
<__main__.A object at 0x1029c27f0>
Now, it is perfectly reasonable for instances of a class to return new instances of the same class; whether it is advisable in your case I don't have enough information to say. But it is certainly possible.
Wrong abstraction: consider creating a class MenuItem, for example, that represents single-line menu entries, or submenus, or some other kind of menu entry that you can't think of today.
In other words: good OOP is about creating helpful abstractions. Menu items can take many different shapes, so the better answer is not to fit raw strings into the list, but to come up with an inheritance hierarchy that supports solving your problem.
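As a rough sketch of that idea (the class names MenuItem, TextItem and SubMenu are invented here for illustration; they are not from the question):

class MenuItem:
    """Base class for anything that can appear in a menu."""
    def render(self, indent=0):
        raise NotImplementedError

class TextItem(MenuItem):
    def __init__(self, label):
        self.label = label

    def render(self, indent=0):
        return ' ' * indent + self.label

class SubMenu(MenuItem):
    def __init__(self):
        self.items = []

    def add_item(self, item):
        self.items.append(item)

    def render(self, indent=0):
        return '\n'.join(item.render(indent + 2) for item in self.items)

menu = SubMenu()
menu.add_item(TextItem('Open'))
recent = SubMenu()
recent.add_item(TextItem('notes.txt'))
menu.add_item(recent)
print(menu.render())

With such a hierarchy, a menu never needs to know whether an entry is a plain label or another menu; it just calls render() on each item.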

How to make one member of a class be both a field and a method?

I have a class A which extends B, and B has a method count(). Now I want to allow the user to call both A.count and A.count(). A.count means count is a field of A, while A.count() means it is the method derived from B.
This is impossible in Python, and here's why:
You can always assign a method (or really any function) to a variable and call it later.
hello = some_function
hello()
is semantically identical to
some_function()
So what would happen if you had an object of your class A called x:
x = A()
foo = x.count
foo()
The only way you could do this is by storing a special object in x.count that is callable and also turns into e.g. an integer when used in that way, but that is horrible and doesn't actually work according to specification.
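For what it's worth, a rough sketch of such a "special object" is shown below - an int subclass that is also callable. The name CallableInt is invented here, and this is meant to illustrate why the approach is awkward, not to recommend it:

class CallableInt(int):
    """An int that can also be called like a zero-argument method."""
    def __call__(self):
        return int(self)

class A:
    def __init__(self):
        self.count = CallableInt(42)

x = A()
print(x.count + 1)   # used as a field: 43
print(x.count())     # used as a "method": 42

The wrapper has to be recreated every time count changes, and anything that checks the exact type of count will be surprised.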
As I said, it's not exactly impossible, despite what the other answers say. Let's look at a didactic example:
class A(object):
    class COUNT(object):
        __val = 12345

        def __call__(self, *args, **kwargs):
            return self.__val

        def __getattr__(self, item):
            return self.__val

        def __str__(self):
            return str(self.__val)

    count = COUNT()

if __name__ == '__main__':
    your_inst = A()
    print(your_inst.count)
    # outputs: 12345
    print(your_inst.count())
    # outputs: 12345
As you may notice, you need to implement a number of things to get that kind of behaviour. First, your class has to expose the attribute count not as the value type you intend, but as an instance of another class. That class, in turn, has to implement (among other things, so that it behaves by duck typing like the type you intend) the __call__ method, returning the same value as its other accessors. That way the public attribute count answers the same whether it is used as a callable (your_inst.count()) or, as you call it, a field (your_inst.count).
By the way, I don't know whether the following is already clear to you, but it may help you understand why it isn't as trivial as one might think to make count and count() behave the same way:
class A(object):
    def count(self):
        return 123

if __name__ == '__main__':
    a = A()
    print(type(a.count))
    # outputs: <class 'method'>
    print(type(a.count()))
    # outputs: <class 'int'>
The dot invokes a's attribute lookup (__getattribute__) to get the item count. a.count returns a reference to that bound method (Python's functions are first-class objects); a.count() performs the same lookup, but the parentheses then invoke the __call__ method of a.count.

Python Descriptor's - Documentation unclear

I was looking at Python's descriptor documentation here, and the statement that got me thinking is:
For objects, the machinery is in object.__getattribute__() which transforms b.x into type(b).__dict__['x'].__get__(b, type(b))
under a section named Invoking Descriptors.
The last part of the statement - b.x into type(b).__dict__['x'].__get__(b, type(b)) - is what is causing the conflict here. As per my understanding, if we look up an attribute on an instance, instance.__dict__ is consulted first, and only if we don't find anything there is type(instance).__dict__ consulted.
In our example, b.x should then be evaluated as:
b.__dict__["x"].__get__(b, type(b)) instead of
type(b).__dict__['x'].__get__(b, type(b))
Is this understanding correct? Or am I going wrong somewhere in interpretation?
Any explanation would be helpful.
Thanks.
I am adding the second part as well:
Why do instance attributes not respect the descriptor protocol? For example, referring to the code below:
>>> class Desc(object):
...     def __get__(self, obj, type):
...         return 1000
...     def __set__(self, obj, value):
...         raise AttributeError
...
>>>
>>> class Test(object):
...     def __init__(self, num):
...         self.num = num
...         self.desc = Desc()
...
>>>
>>> t = Test(10)
>>> print "Desc details are ", t.desc
Desc details are  <__main__.Desc object at 0x7f746d647890>
Thanks for helping me out.
Your understanding is incorrect. x most likely does not appear in the instance's dict at all; the descriptor object appears in the class's dict or the dict of one of the superclasses.
Let's use an example:
class Foo(object):
    @property
    def x(self):
        return 0

    def y(self):
        return 1

x = Foo()
x.__dict__['x'] = 2
x.__dict__['y'] = 3
Foo.x and Foo.y are both descriptors. (Properties and functions both implement the descriptor protocol.)
When we access x.x:
>>> x.x
0
We do not get the value from x's dict. Instead, since Python finds a data descriptor by the name of x in Foo.__dict__, it calls
Foo.__dict__['x'].__get__(x, Foo)
and returns the result. The data descriptor wins over the instance dict.
On the other hand, if we try x.y:
>>> x.y
3
we get 3, rather than a bound method object. Functions don't have __set__ or __delete__, so the instance dict overrides them.
As for the new Part 2 to your question, descriptors don't function in the instance dict. Consider what would happen if they did:
class Foo(object):
    @property
    def bar(self):
        return 4

Foo.bar = 3
If descriptors functioned in the instance dict, then the assignment to Foo.bar would find a descriptor in Foo's dict and call Foo.__dict__['bar'].__set__. The __set__ method of the descriptor would have to handle setting the attribute on both the class and the instance, and it would have to tell the difference somehow, even in the face of metaclasses. There just isn't a compelling reason to complicate the protocol this way.
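To tie this back to your Test example, here is a small sketch of the difference (the class names InstanceLevel and ClassLevel are made up for illustration):

class Desc(object):
    def __get__(self, obj, objtype=None):
        return 1000

class InstanceLevel(object):
    def __init__(self):
        self.desc = Desc()   # lives in the instance dict: __get__ is never called

class ClassLevel(object):
    desc = Desc()            # lives in the class dict: __get__ is called

print(InstanceLevel().desc)  # <__main__.Desc object at 0x...>
print(ClassLevel().desc)     # 1000

This is exactly why t.desc in your snippet prints the Desc instance itself rather than 1000.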
