I've imported a package that provides me with a class and a wrapper function that creates an instance of that class.
For example:
class Foo:
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def print_a(self):
        print(self.a)

    def print_b(self):
        print(self.b)

def makeFoo(x, y):
    a = x + y
    b = x - y
    return Foo(a, b)
I want to have a similar class NamedFoo, that has the same properties/methods, also has a name property, and with a constructor that calls makeFoo. I figure that this should be solved using inheritance, with NamedFoo being a subclass of Foo. However, I don't know how to make the NamedFoo constructor utilize makeFoo correctly:
class NamedFoo(Foo):
    def __init__(self, x, y, name):
        # ???
        # Foo = makeFoo(x, y) ??
        # self.Foo = makeFoo(x, y) ??
        self.name = name

    def printName(self):
        print(self.name)
Example data:
myNamedFoo = NamedFoo(2, 5, "first")
myNamedFoo.print_a() # (From makeFoo: a = x + y) ==> 2 + 5 = 7
myNamedFoo.print_b() # (From makeFoo: b = x - y) ==> 2 - 5 = -3
I'm not too familiar with object-oriented programming, so I might just be using the wrong search terms, but I haven't found anything similar to what I need. Is this possible, and if so, how can I do it?
I'm also not sure if this is an X/Y problem, but here are the alternatives I've considered and why I don't think they're ideal:
Composition of Foo and the property name: It's ugly and doesn't seem right.
Manually adding the name property to each Foo object, and perhaps wrapping that in a function: doesn't quite have the elegance of a one-liner constructor.
Rewriting the constructor of the Foo class to contain the same code as makeFoo: makeFoo is rather complex and needs to do a lot of setup, and this would in any case lead to code duplication.
In the NamedFoo constructor, create an instance of the Foo class from the makeFoo wrapper function. Pass this instance's attributes to the super().__init__.
class NamedFoo(Foo):
    def __init__(self, x, y, name):
        _foo = makeFoo(x, y)  # use the wrapper to handle the complex logic on the input params
        super().__init__(_foo.a, _foo.b)  # pass the properly derived Foo attributes to the superclass constructor
        self.name = name
This way, NamedFoo is instantiated from whatever magic happens within the makeFoo function: your x and y go to the wrapper, which creates a throwaway Foo instance (so it is properly constructed by whatever complex logic resides in the helper function), and the final NamedFoo is then initialized via the Foo constructor with that instance's attributes.
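A quick check with the example data from the question (assuming the Foo, makeFoo, and NamedFoo definitions above) would look like this:

myNamedFoo = NamedFoo(2, 5, "first")
myNamedFoo.print_a()    # 7   (a = x + y)
myNamedFoo.print_b()    # -3  (b = x - y)
myNamedFoo.printName()  # first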
I think this should work:
class Foo:
    def __init__(self, a, b):
        self.a = a + b
        self.b = a - b

    def print_a(self):
        print(self.a)

    def print_b(self):
        print(self.b)

class NamedFoo(Foo):
    def __init__(self, a, b, name):
        super().__init__(a, b)
        self.name = name

def main():
    example = NamedFoo(2, 5, "first")
    example.print_a()
    example.print_b()

main()
This prints out:
7
-3
Or, if you really want to use a function to create self.a and self.b, use this:
class Foo:
    def __init__(self, a, b):
        self.a, self.b = make_foo(a, b)

    def print_a(self):
        print(self.a)

    def print_b(self):
        print(self.b)

class NamedFoo(Foo):
    def __init__(self, a, b, name):
        super().__init__(a, b)
        self.name = name

def make_foo(x, y):
    return x + y, x - y

def main():
    example = NamedFoo(2, 5, "first")
    example.print_a()
    example.print_b()

main()
Related
I am trying to find a good way of returning a (new) class object from a class method, in a way that also works for subclasses.
I have a class (classA) which has, among other methods, a method that returns a new classA object after some processing:
class classA:
    def __init__(self): ...
    def methodX(self, **kwargs):
        # process data
        return classA(new_params)
Now I am extending this class into another class, classB. I need methodX to do the same, but return a classB this time instead of a classA:
class classB(classA):
    def __init__(self, params):
        super().__init__(params)
        self.newParams = XYZ

    def methodX(self, **kwargs):
        # ???
This may be something trivial, but I simply cannot figure it out. In the end, I don't want to rewrite methodX each time the class gets extended.
Thank you for your time.
Use the __class__ attribute like this:
class A:
    def __init__(self, **kwargs):
        self.kwargs = kwargs

    def methodX(self, **kwargs):
        # do stuff with kwargs
        return self.__class__(**kwargs)

    def __repr__(self):
        return f'{self.__class__}({self.kwargs})'

class B(A):
    pass

a = A(foo='bar')
ax = a.methodX(gee='whiz')
b = B(yee='haw')
bx = b.methodX(cool='beans')

print(a)   # e.g. <class '__main__.A'>({'foo': 'bar'})
print(ax)  # e.g. <class '__main__.A'>({'gee': 'whiz'})
print(b)   # e.g. <class '__main__.B'>({'yee': 'haw'})
print(bx)  # e.g. <class '__main__.B'>({'cool': 'beans'})
class classA:
    def __init__(self, x):
        self.x = x

    def createNew(self, y):
        t = type(self)
        return t(y)

class classB(classA):
    def __init__(self, params):
        super().__init__(params)

a = classA(1)
newA = a.createNew(2)
b = classB(1)
newB = b.createNew(2)
print(type(newB))
# <class '__main__.classB'>
I want to propose what I think is the cleanest approach, albeit similar to existing answers. The problem feels like a good fit for a class method:
class A:
    @classmethod
    def method_x(cls, **kwargs):
        return cls(<init params>)
Using the @classmethod decorator ensures that the first input (traditionally named cls) will refer to the class to which the method belongs, rather than to an instance.
(Usually we call the first method input self, and it refers to the instance to which the method belongs.)
Because cls refers to A, rather than an instance of A, we can call cls() as we would call A().
However, in a class that inherits from A, cls will instead refer to the child class, as required:
class A:
    def __init__(self, x):
        self.x = x

    @classmethod
    def make_new(cls, **kwargs):
        y = kwargs["y"]
        return cls(y)  # returns A(y) here

class B(A):
    def __init__(self, x):
        super().__init__(x)
        self.z = 3 * x

inst = B(1).make_new(y=7)
print(inst.x, inst.z)
And now you can expect that print statement to produce 7 21.
That inst.z exists should confirm for you that the make_new call (which was only defined on A and inherited unaltered by B) has indeed made an instance of B.
However, there's something I must point out. Inheriting the unaltered make_new method only works because the __init__ method on B has the same call signature as the method on A. If this weren't the case then the call to cls might have had to be altered.
This can be circumvented by allowing **kwargs on the __init__ method and passing generic **kwargs into cls() in the parent class:
class A:
    def __init__(self, **kwargs):
        self.x = kwargs["x"]

    @classmethod
    def make_new(cls, **kwargs):
        return cls(**kwargs)

class B(A):
    def __init__(self, x, w):
        super().__init__(x=x)
        self.w = w

inst = B(1, 2).make_new(x="spam", w="spam")
print(inst.x, inst.w)
Here we were able to give B a different (more restrictive!) signature.
This illustrates a general principle, which is that parent classes will typically be more abstract/less specific than their children.
It follows that, if you want two classes that substantially share behaviour but which do quite specific different things, it will be better to create three classes: one rather abstract one that defines the behaviour-in-common, and two children that give you the specific behaviours you want.
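As a sketch of that three-class layout (all names here are made up for illustration), the shared classmethod lives on the abstract parent and each child gets instances of its own type back:

class Shape:
    """Abstract parent: behaviour shared by all shapes."""
    def __init__(self, size):
        self.size = size

    @classmethod
    def scaled(cls, size, factor):
        # cls is whichever concrete class this was called on
        return cls(size * factor)

class Square(Shape):
    def area(self):
        return self.size ** 2

class Circle(Shape):
    def area(self):
        return 3.14159 * self.size ** 2

print(type(Square.scaled(2, 3)))  # <class '__main__.Square'>
print(type(Circle.scaled(2, 3)))  # <class '__main__.Circle'>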
I don't understand why the code uses the print_me method from class D, and not the method in class A.
I did some testing with print statements and can see that it runs D's print_me method when initialising class A (via D), but I don't understand why it doesn't use the print_me defined in class A itself.
class A:
    name = "Alfa"
    def __init__(self, foo):
        self.foo = foo
        foo = 100
        self.print_me()

    def print_me(self):
        print(self.name, self.foo)

class B(A):
    name = "Beta"
    def __init__(self, bar=40):
        self.bar = bar
        print(self.name, bar)

class C:
    name = "Charlie"

class D(A, C):
    name = "Delta"
    def __init__(self, val):
        A.__init__(self, val)

    def print_me(self):
        print(self.name, "says", self.foo)

d = D(60)
The output is: Delta says 60
I thought it would be: Delta 60
Because the self you are passing to the __init__ of A is still an instance of D, not A, and the function A.__init__ calls self.print_me, which resolves to D's version.
If you do a = A(60); a.print_me() you'd get what you expect.
Important note: the __init__ method in Python is not the actual constructor; it is just a method that is automatically called after the actual construction of the object. However, when you call it yourself, it works just like any other method.
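A quick way to see this, assuming the classes above, is to look at D's method resolution order and check which print_me the attribute lookup actually finds:

print(D.__mro__)
# (<class '__main__.D'>, <class '__main__.A'>, <class '__main__.C'>, <class 'object'>)
print(D.print_me is A.print_me)  # False: D overrides print_me, so self.print_me resolves to D's version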
Let us suppose the following situation: I have a class with some initial values. Furthermore, I want to provide the possibility of passing a user-defined method when initializing a new object. The user knows about the attributes of the class in advance and may want to use them in that function, for instance:
class some_class():
    def __init__(self, some_method):
        # some initial values
        self.a = 8
        self.b = 12
        # initializing a new object with a user-specific method
        self.some_method = some_method

    def some_method(self):
        pass  # this method shall be specified by the user

# user-specific function
def some_function():
    return self.a + self.b

some_object = some_class(some_method=some_function)
print(some_object.some_method())
Of course, the given example does not work, but I hope it shows what I want to do. I am searching for a way to define a function outside the class that refers to the attributes of the object it is attached to, once it has been passed in during initialization.
What I am trying to avoid is solving the problem with fixed naming conventions, for instance:
class some_class():
    def __init__(self, some_method):
        self.a = 8
        self.b = 12
        self.some_method = some_method

    def some_method(self):
        pass

def some_function():
    return some_object.a + some_object.b  # -> fixed names to avoid the problem

some_object = some_class(some_method=some_function)
print(some_object.some_method())
I think what I need is a kind of placeholder or alternative to self. Does anybody have an idea?
This works, although I'm not sure it is the most elegant way to achieve what you want:
class some_class():
    def __init__(self, some_method):
        self.a = 8
        self.b = 12
        self.some_method_func = some_method

    def some_method(self):
        return self.some_method_func(self)

def some_function(self):
    return self.a + self.b

some_object = some_class(some_method=some_function)
print(some_object.some_method())  # 20
If I'm reading this right, then the easy option is just to do the following:
import types

class some_class():
    def __init__(self):
        self.a = 8
        self.b = 12

    def some_method(self):
        pass

def my_method(self):
    return self.a + self.b

# Redefine the method on the class itself
some_class.some_method = my_method

# Or, if you only want to do it for a specific instance, bind the function
# to that instance so it receives self automatically:
instance = some_class()
instance.some_method = types.MethodType(my_method, instance)
The usual way to do this 'properly' though is with sub-classing.
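For completeness, a minimal sketch of that subclassing approach (my_class is a made-up name): the base declares the hook and the subclass supplies it.

class some_class:
    def __init__(self):
        self.a = 8
        self.b = 12

    def some_method(self):
        raise NotImplementedError  # subclasses are expected to supply this

class my_class(some_class):
    def some_method(self):
        return self.a + self.b

print(my_class().some_method())  # 20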
You need to get the class and the caller of the class to agree to a contract. The contract is that the class will pass in the instance of the class to the function, and the function must accept that as an argument.
class some_class():
    def some_method(self):
        return self.some_method_func(self)

def some_function(obj):
    return obj.a + obj.b
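Putting both halves of that contract together, a minimal sketch (the __init__ here is my addition, mirroring the earlier answers):

class some_class:
    def __init__(self, some_method):
        self.a = 8
        self.b = 12
        self.some_method_func = some_method  # the class's half of the contract

    def some_method(self):
        # hand the instance to the user-supplied function
        return self.some_method_func(self)

def some_function(obj):
    # the caller's half: accept the instance as an argument
    return obj.a + obj.b

print(some_class(some_function).some_method())  # 20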
I am trying to make a Python decorator that adds attributes to methods of a class, so that I can access and modify those attributes from within the method itself. The decorator code is:
from types import MethodType

class attribute(object):
    def __init__(self, **attributes):
        self.attributes = attributes

    def __call__(self, function):
        class override(object):
            def __init__(self, function, attributes):
                self.__function = function
                for att in attributes:
                    setattr(self, att, attributes[att])

            def __call__(self, *args, **kwargs):
                return self.__function(*args, **kwargs)

            def __get__(self, instance, owner):
                return MethodType(self, instance, owner)

        retval = override(function, self.attributes)
        return retval
I tried this decorator on the toy example that follows.
class bar(object):
    @attribute(a=2)
    def foo(self):
        print self.foo.a
        self.foo.a = 1
Though I am able to access the value of attribute 'a' from within foo(), I can't set it to another value. Indeed, when I call bar().foo(), I get the following AttributeError.
AttributeError: 'instancemethod' object has no attribute 'a'
Why is this? More importantly, how can I achieve my goal?
Edit
Just to be more specific, I am trying to find a simple way to implement static variables that live inside class methods. Continuing from the example above, I would like to instantiate b = bar(), call both the foo() and doo() methods, and then access b.foo.a and b.doo.a later on:
class bar(object):
    @attribute(a=2)
    def foo(self):
        self.foo.a = 1

    @attribute(a=4)
    def doo(self):
        self.doo.a = 3
The best way to do this is to not do it at all.
First of all, there is no need for an attribute decorator; you can just assign it yourself:
class bar(object):
    def foo(self):
        print self.foo.a
        self.foo.a = 1
    foo.a = 2
However, this still encounters the same errors. You need to do:
self.foo.__dict__['a'] = 1
You can instead use a metaclass...but that gets messy quickly.
On the other hand, there are cleaner alternatives.
You can use defaults:
def foo(self, a=[1]):
    print a[0]
    a[0] = 2

# the stored default can be swapped out later through func_defaults:
foo.func_defaults = foo.func_defaults[:-1] + ([2],)
Of course, my preferred way is to avoid this altogether and use a callable class (a "functor", in C++ terms):
class bar(object):
    def __init__(self):
        self.foo = self.foo_method(self)

    class foo_method(object):
        def __init__(self, bar):
            self.bar = bar
            self.a = 2

        def __call__(self):
            print self.a
            self.a = 1
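A quick usage sketch of that callable class, assuming the bar definition above:

b = bar()
b.foo()  # prints 2 on the first call, then sets a to 1
b.foo()  # prints 1 from then on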
Or just use classic class attributes:
class bar(object):
    def __init__(self):
        self.a = 1

    def foo(self):
        print self.a
        self.a = 2
If it's that you want to hide a from derived classes, use what passes for private attributes in Python, i.e. name-mangled double-underscore attributes:
class bar(object):
    def __init__(self):
        self.__a = 1  # this will be implicitly mangled to _bar__a

    def foo(self):
        print self.__a
        self.__a = 2
EDIT: You want static attributes?
class bar(object):
    a = 1
    def foo(self):
        print self.a
        self.a = 2
EDIT 2: If you want static attributes visible to only the current function, you can use PyExt's modify_function:
import pyext

def wrap_mod(*args, **kw):
    def inner(f):
        return pyext.modify_function(f, *args, **kw)
    return inner

class bar(object):
    @wrap_mod(globals={'a': [1]})
    def foo(self):
        print a[0]
        a[0] = 2
It's slightly ugly and hackish. But it works.
My recommendation would be just to use double underscores:
class bar(object):
    __a = 1
    def foo(self):
        print self.__a
        self.__a = 2
Although this is visible to the other functions, it's invisible to anything else (actually, it's there, but it's mangled).
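For example, a small check of that mangling, assuming the class above (the attribute stays reachable from outside under its mangled name _bar__a):

b = bar()
b.foo()  # prints 1, then rebinds the mangled attribute on the instance
b.foo()  # prints 2
print(b._bar__a)  # 2 -- "private", but only by name mangling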
FINAL EDIT: Use this:
import pyext

def wrap_mod(*args, **kw):
    def inner(f):
        return pyext.modify_function(f, *args, **kw)
    return inner

class bar(object):
    @wrap_mod(globals={'a': [1]})
    def foo(self):
        print a[0]
        a[0] = 2
    foo.a = foo.func_globals['a']

b = bar()
b.foo() # prints 1
b.foo() # prints 2

# external access
b.foo.a[0] = 77
b.foo() # prints 77
While you can accomplish your goal by replacing self.foo.a = 1 with self.foo.__dict__['a'] = 1, it is generally not recommended.
If you are using Python 2 (and not Python 3), whenever you retrieve a method from an instance, a new instance-method object is created, which is a wrapper around the original function defined in the class body.
The instance method is a rather transparent proxy to the function - you can retrieve the function's attributes through it, but not set them - that is why setting an item in self.foo.__dict__ works.
Alternatively, you can reach the function object itself using self.foo.im_func - the im_func attribute of an instance method points to the underlying function.
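A minimal Python 2 sketch of that im_func route (plain method, no decorator involved; this is my illustration, not the original code):

class bar(object):
    def foo(self):
        self.foo.im_func.a = 1  # im_func is the plain function defined in the class body

b = bar()
b.foo()
assert bar.foo.im_func.a == 1  # the attribute now lives on the function itself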
Based on other contributors' answers, I came up with the following workaround. First, wrap a dictionary in a class that resolves non-existent attributes to the wrapped dictionary, such as the following code.
class DictWrapper(object):
    def __init__(self, d):
        self.d = d

    def __getattr__(self, key):
        return self.d[key]
Credits to Lucas Jones for this code.
Then implement an addstatic decorator with a statics attribute that will store the static attributes:
class addstatic(object):
    def __init__(self, **statics):
        self.statics = statics

    def __call__(self, function):
        class override(object):
            def __init__(self, function, statics):
                self.__function = function
                self.statics = DictWrapper(statics)

            def __call__(self, *args, **kwargs):
                return self.__function(*args, **kwargs)

            def __get__(self, instance, objtype):
                from types import MethodType
                return MethodType(self, instance)

        retval = override(function, self.statics)
        return retval
The following code is an example of how the addstatic decorator can be used on methods.
class bar(object):
    @addstatic(a=2, b=3)
    def foo(self):
        self.foo.statics.a = 3
        self.foo.statics.b = 5
Then, playing with an instance of the bar class yields:
>>> b = bar()
>>> b.foo.statics.a
2
>>> b.foo.statics.b
3
>>> b.foo()
>>> b.foo.statics.a
3
>>> b.foo.statics.b
5
The reason for using this statics dictionary follows jsbueno's answer, which suggests that what I want would require overloading the dot operator of an instance method wrapping the foo function, which I am not sure is possible. Of course, the method's attributes could be set in self.foo.__dict__, but since that is not recommended (as suggested by brainovergrow), I came up with this workaround. I am not certain this would be recommended either, and I guess it is up for comments.
I want a Python class that has a nested class, where the inner class can access the members of the outer class. I understand that normal nesting doesn't even require that the outer class has an instance. I have some code that seems to generate the results I desire, and I want feedback on style and unforeseen complications.
Code:
class A():
    def __init__(self, x):
        self.x = x
        self.B = self.classBdef()

    def classBdef(self):
        parent = self
        class B():
            def out(self):
                print parent.x
        return B
Output:
>>> a = A(5)
>>> b = a.B()
>>> b.out()
5
>>> a.x = 7
>>> b.out()
7
So, A has an inner class B, which can only be created from an instance of A. Then B has access to all the members of A through the parent variable.
This doesn't look very good to me. classBdef is a class factory method. Usually (though rarely) you would use one of these to create a customized class, e.g. a class with a custom superclass:
def class_factory(superclass):
    class CustomClass(superclass):
        def custom_method(self):
            pass
    return CustomClass
But your construct doesn't make use of any customization. In fact, it puts A's details into B and couples them tightly. If B needs to know about some A variable, then make a method call with parameters, or instantiate a B object with a reference to the A object.
Unless there is a specific reason or problem you need to solve, it would be much easier and clearer to give A a normal factory method that returns a B object, instead of constructs like b = a.B():
class B(object):
    def __init__(self, a):
        self.a = a

    def out(self):
        print self.a.x

class A(object):
    def __init__(self, x):
        self.x = x

    def create_b(self):
        return B(self)

a = A(5)
b = a.create_b()
b.out()
I don't think what you're trying to do is a very good idea. "Inner" classes in Python have absolutely no special relationship with their "outer" class, if you bother to define one inside of another. It is exactly the same to say:
class A(object):
    class B(object):
        pass
as it is to say:
class B(object): pass
class A(object): pass
A.B = B
del B
That said, it is possible to accomplish something like what you're describing by making your "inner" class into a descriptor, defining __get__() on its metaclass. I recommend against doing this -- it's too complicated and yields little benefit.
class ParentBindingType(type):
    def __get__(cls, inst, instcls):
        return type(cls.__name__, (cls,), {'parent': inst})

    def __repr__(cls):
        return "<class '%s.%s' parent=%r>" % (cls.__module__,
            cls.__name__, getattr(cls, 'parent', None))

class B(object):
    __metaclass__ = ParentBindingType
    def out(self):
        print self.parent.x

class A(object):
    _B = B
    def __init__(self, x):
        self.x = x
        self.B = self._B

a = A(5)
print a.B
b = a.B()
b.out()
a.x = 7
b.out()
printing:
<class '__main__.B' parent=<__main__.A object at 0x85c90>>
5
7