I am trying to write a GTK widget in Python that is a subclass of gtk.Bin and am not sure how to go about instantiating it. The first few lines of my class look like:
class Completer(gtk.Bin):
    def __init__(self, exts):
        gtk.Container.__init__(self)
        child = gtk.VBox(spacing=15)
        self.add(child)
I'm not sure how to set the child attribute, hence the code for that. But it fails on the line gtk.Container.__init__(self) with the message:
File "C:\Users\462974\Documents\Local Sandbox\tools\python\packages\GUI\tools\SNCompleter.py", line 133, in __init__
gtk.Container.__init__(self)
TypeError: cannot create instance of abstract (non-instantiable) type `GtkBin'
It also happens if I call gtk.Bin.__init__. I'm not sure how to initialize this subclass, but there is presumably a way since GTK does have usable subclasses of gtk.Bin.
You need to register a new gtype for your widget; otherwise it will use the same gtype as the superclass, and since that is an abstract type, you won't be able to instantiate it (as the exception indicates).
There are two ways of registering a new gtype:
Using gobject.type_register.
Setting the __gtype_name__ class variable in your class.
Here's an example using the second option (since I believe it's more straightforward):
class Completer(gtk.Bin):
    __gtype_name__ = "Completer"

    def __init__(self, exts, *args, **kwargs):
        super(Completer, self).__init__(*args, **kwargs)
        child = gtk.VBox(spacing=15)
        self.add(child)
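If you prefer the first option instead, a roughly equivalent sketch (assuming the usual PyGTK import gobject) registers the type explicitly after the class definition:

import gobject
import gtk

class Completer(gtk.Bin):
    def __init__(self, exts, *args, **kwargs):
        super(Completer, self).__init__(*args, **kwargs)
        child = gtk.VBox(spacing=15)
        self.add(child)

# register a new GType for the widget instead of setting __gtype_name__
gobject.type_register(Completer)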
Trying to port a singleton implemented with a metaclass from Python 2 to Python 3, __new__ returns:
[ ERROR ] Error in file Importing test library 'C:\Users\TestTabs.py' failed: __class__ not set defining 'BrowserDriver' as <class 'BrowserDriver.BrowserDriver'>. Was __classcell__ propagated to type.__new__?
CODE:
class Singleton(type):
    _instance = None

    def __new__(cls, *args, **kwargs):
        print('Newtest')
        if cls._instance is None:
            Singleton._instance = type.__new__(cls, *args, **kwargs)
        return Singleton._instance
It is used like this:
class BrowserDriver(metaclass=Singleton):
First: you should not be using a metaclass to implement a singleton.
Second: your "singleton" code is broken, even if it appeared to work:
By luck, it ran into a new mechanism used in class creation, which requires type.__new__ to receive the "class cell" when creating a new class, and this was detected.
So, the mysterious __class__ cell will exist if any method in your class uses a call to super(). Python will create a rather magic __class__ variable that receives a reference to the class being created when the class body execution ends. At that point, metaclass.__new__ is called. When the call to metaclass.__new__ returns, the Python runtime expects that the __class__ magic variable for that class is now "filled in" with a reference to the class itself.
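A small sketch, just to illustrate the mechanism: a metaclass whose __new__ forwards the whole namespace to type.__new__ keeps the __classcell__ intact, so the implicit __class__ reference (and zero-argument super()) works:

class Meta(type):
    def __new__(mcls, name, bases, namespace, **kwargs):
        # forwarding the namespace propagates __classcell__ to type.__new__
        return super().__new__(mcls, name, bases, namespace, **kwargs)

class C(metaclass=Meta):
    def whoami(self):
        return __class__   # the magic cell, filled in once the class is created

print(C().whoami() is C)   # True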
This is for a working class creation - now we come to the bug in your code:
I don't know where you got this "singleton metaclass code" from, but it is broken: even if it worked, it would create ONE SINGLE CLASS for all classes using this metaclass - and not, as was probably desired, allow a single instance of each class using this metaclass. (And since the new class body does not get its __class__ attribute set, you get the error you described under Python 3.8.)
In other words: any class past the first one using this metaclass is simply ignored, and not used by the program at all.
The (overkill) idea of using a metaclass to create singleton-enforcing classes is, yes, to allow a single instance of a class, but the cache for the single instance should be kept on the class itself, not on the metaclass - or else in an attribute of the metaclass that holds one instance for each class created, such as a dictionary. A plain class attribute on the metaclass, as featured in this code, just makes every class past the first be ignored.
So, to fix that using metaclasses, the cache logic should be in the metaclass __call__ method, not in its __new__ method.
This is the expressly not recommended, but working, metaclass to enforce singletons:
class SingletonEnforcingmeta(type):
    def __call__(cls, *args, **kw):
        # check the "__dict__" entry instead of using "hasattr" - this allows
        # inheritance, giving one instance per subclass
        if "_instance" not in cls.__dict__:
            cls._instance = super().__call__(*args, **kw)
        return cls._instance
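A quick sketch of how it behaves, reusing the BrowserDriver name from the question (the url parameter is just illustrative):

class BrowserDriver(metaclass=SingletonEnforcingmeta):
    def __init__(self, url=None):
        self.url = url

first = BrowserDriver("https://example.com")
second = BrowserDriver("https://example.org")
print(first is second)   # True: the second call returned the cached instance
print(first.url)         # "https://example.com" - __init__ did not run a second time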
But, as I wrote above, it is overkill to have a metaclass if you just want a singleton - the instantiation mechanism in __new__ itself is enough for creating a single-instance cache.
But before doing that, one should think: is a "singleton-enforcing class" really necessary? This is Python - the flexible structure and "consenting adults" mindset of the language let you simply create an instance of your class in the same namespace where you created the class itself, and just use that single instance from that point on.
Actually, if your single instance has the same name as the class, no one can even create a new instance by accident, as the class itself becomes reachable only indirectly. That is:
Nice thing to do: if you need a singleton, create a singleton, not a "singleton-enforcing class".
class BrowserDriver(...):
    # normal code for the class here
    ...

BrowserDriver = BrowserDriver()
That is all there is to it. All you have now is a single instance of the BrowserDriver class that can be used from any place in your code.
Now, if you really need a singleton-enforcing class, one that, on any attempt to create an instance beyond the first, will not raise an error but silently return the first instance ever created, then the code you need in the __new__ method of the class is like the code you were trying to use as the metaclass's __new__. It records the single instance in the class itself:
if really needed: singleton enforcing-class using __new__:
class SingletonBase:
    def __new__(cls, *args, **kw):
        if "_instance" not in cls.__dict__:
            # object.__new__ takes no extra arguments, so don't forward them
            cls._instance = super().__new__(cls)
        return cls._instance
And then just inherit your "I must be a singleton" classes from this base.
Note, however, that __init__ will be called on the single instance at each instantiation attempt - so these singletons should use __new__ (and call super() as appropriate) instead of having an __init__ method, or should have an idempotent __init__ (i.e. it can be called more than once, but the extra calls have no effect).
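For illustration (BrowserDriver and the url parameter are made-up names here), note how __init__ still runs on every instantiation attempt:

class BrowserDriver(SingletonBase):
    def __init__(self, url=None):
        # runs on every BrowserDriver(...) call, so it must be safe to repeat
        self.url = url

a = BrowserDriver("https://example.com")
b = BrowserDriver()      # returns the very same object...
print(a is b)            # True
print(a.url)             # None - ...but the second __init__ call overwrote it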
I am trying to figure out how to have a child class reside in another module. Currently it is more convenient for me to store the parent and child classes in different modules due to their size. I need the super method, since I want to inherit not just all the functions, but the variables in self as well. My current solution is as follows:
Parent Module (parent.py):
class A:
    def __init__(self, *args, **kwargs):
        super(A, self).__init__(*args, **kwargs)
Child Module (child.py):
from parent import A

class B(A):
    def __init__(self, *args, **kwargs):
        super(B, self).__init__(*args, **kwargs)

B()
When I run the child module I get the following error.
TypeError: super(type, obj): obj must be an instance or subtype of type
I understand that this is due to the module being reloaded, which causes data to be lost, but I am not sure if there is a workaround.
First, on your code:
It's not necessary to always call the parent constructor; in particular, calling object's constructor as you do in parent.A is not needed.
In Python 3, you can use the much simpler super().__init__ form of the call for single inheritance.
The import should usually be relative: from .parent import A
Now, to your actual problem:
When you reload parent in this case, you essentially generate a new class object for A that is not identical to the one that your compiled B knows of. You can check this by comparing id(B.__base__) to id(A) after the reload. This is not a problem for the super() form, as that doesn't use the name A explicitly (which points to the new class) but instead uses the actual base class. So it will construct fine, but with the "old" A implementation.
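A minimal sketch of checking this, assuming the two modules above and a reload of parent via importlib.reload:

import importlib
import parent, child

b = child.B()                         # fine: B was built against the original parent.A
importlib.reload(parent)              # re-executes parent.py, rebinding the name A
print(parent.A is child.B.__base__)   # False: B still inherits from the old A object
child.B()                             # the old A's super(A, self) now looks up the new A
                                      # and raises the TypeError from the question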
P.S.:
It is essential that your question includes information on what you are actually trying to do, in this case reloading a module, which is not a "standard" operation in Python (that's why it's so cumbersome to do).
I have the following base class and subclass:
class Event:
    def __init__(self, sr1=None, foobar=None):
        self.sr1 = sr1
        self.foobar = foobar
        self.state = STATE_NON_EVENT

# Event class wrappers to provide syntactic sugar
class TypeTwoEvent(Event):
    def __init__(self, level=None):
        self.sr1 = level
        self.state = STATE_EVENT_TWO
Further on in my code, I am inspecting an instance of the TypeTwoEvent class, checking for a field I know exists in the base class - I expected it to default to None. However, my code raises the following exception:
AttributeError: 'TypeTwoEvent' object has no attribute 'foobar'
I was under the impression that the base class fields would be inherited by the subclass and that creating an instance of a subclass will instantiate the base class (and thus invoke its constructor) ...
What am I missing here? Why does TypeTwoEvent not have a foobar attribute - when the base class from which it is derived has a foobar attribute?
Your subclass should be:
class TypeTwoEvent(Event):
    def __init__(self, level=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.sr1 = level
        self.state = STATE_EVENT_TWO
You override the __init__ method, so you need to call the parent method if you want the parent behavior to happen.
Remember, __init__ is not a special method despite its strange name. It's just the method automatically called after the object is created. Otherwise it's an ordinary method, and ordinary inheritance rules apply.
super().__init__(arguments, that, goes, to, parents)
is the syntax to call the parent version of the method.
As for *args and **kwargs, they just ensure we catch all additional arguments passed to __init__ and pass them on to the parent method, as your child method signature didn't do so and the parent needs these arguments to work.
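With that in place, constructing the subclass also runs the base-class initialization (assuming the STATE_* constants from the original code are defined):

event = TypeTwoEvent(level=2, foobar="metadata")
print(event.foobar)                   # "metadata", forwarded to Event.__init__
print(TypeTwoEvent(level=2).foobar)   # None, the base-class default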
You're overriding the constructor (__init__) of the parent class. To extend it, you need to explicitly call the constructor of the parent with a super() call.
class TypeTwoEvent(Event):
    def __init__(self, level=None, **kwargs):
        # the super call to set the attributes in the parent class
        super().__init__(**kwargs)
        # now, extend other attributes
        self.sr1 = level
        self.state = STATE_EVENT_TWO
Note that the super call is not always at the top of the __init__ method in your sub-class. Its location depends on your situation and logic.
When the instance is created, its __init__ method is called. In this case, that is TypeTwoEvent.__init__. Superclass methods will not be called automatically because that would be immensely confusing.
You should call Event.__init__(self, ...) from TypeTwoEvent.__init__ (or use super, but if you're not familiar with it, read up on it first so you know what you're doing).
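A sketch of that explicit, non-super form, using the classes from the question:

class TypeTwoEvent(Event):
    def __init__(self, level=None, foobar=None):
        # call the base-class initializer directly, passing self explicitly
        Event.__init__(self, foobar=foobar)
        self.sr1 = level
        self.state = STATE_EVENT_TWO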
You need to call the __init__ method of the base class from the __init__ method of the inherited class.
See here for how to do this.
I've had the same problem, but in my case I put super().__init__() at the bottom of my derived class's __init__, and that's why it didn't work: I was trying to use attributes that had not been initialized yet.
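A contrived sketch of that pitfall:

class TypeTwoEvent(Event):
    def __init__(self, level=None, **kwargs):
        self.sr1 = level or self.foobar   # AttributeError: foobar is not set yet
        super().__init__(**kwargs)        # too late - this should run first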
I'm trying to wrap my head around how to utilize inheritance in some code I'm writing for an API. I have the following parent class which holds a bunch of common variables that I'd like to instantiate once, and inherit with other classes to make my code look cleaner:
class ApiCommon(object):
    def __init__(self, _apikey, _serviceid=None, _vclversion=None,
                 _aclname=None, _aclid=None):
        self.BaseApiUrl = "https://api.fastly.com"
        self.APIKey = _apikey
        self.headers = {'Fastly-Key': self.APIKey}
        self.ServiceID = _serviceid
        self.VCLVersion = _vclversion
        self.ACLName = _aclname
        self.ACLid = _aclid
        self.Data = None
        self.IP = None
        self.CIDR = None
        self.fullurl = None
        self.r = None
        self.jsonresp = None
        self.ACLcomment = None
        self.ACLentryid = None
And I am inheriting it in another class below, like so in a lib file called lib/security.py:
from apicommon import ApiCommon

class EdgeAclControl(ApiCommon):
    def __init__(self):
        super(EdgeAclControl, self).__init__()

    ...

    def somemethodhere(self):
        return 'stuff'
When I instantiate an ApiCommon object, I can't access the methods in EdgeAclControl(ApiCommon). Example of what I'm trying that isn't working:
from lib import security
gza = security.ApiCommon(_aclname='pytest', _apikey='mykey',
                         _serviceid='stuffhere', _vclversion=5)
gza.somemethodhere()
How would I instantiate ApiCommon and have access to the methods in EdgeAclControl?
Your current code appears to be trying to use inheritance backwards. When you create an instance of ApiCommon, it will only get the methods defined in that base class. If you want to get methods from a subclass, you need to create an instance of the subclass instead.
So the first fix you need to make is to change gza = security.ApiCommon(...) to gza = EdgeAclControl(...) (though depending on how you're doing your imports, you might need to prefix the class name with a module).
The second issue is that your EdgeAclControl class doesn't take the arguments that its base class needs. Your current code doesn't pass any arguments to super(...).__init__, which doesn't work since the _apikey parameter is required. You could repeat all the arguments again in the subclass, but a lot of the time it's easier to use variable-argument syntax instead.
I suggest that you change EdgeAclControl.__init__ to accept *args and/or **kwargs and pass on those variable arguments when it calls its parent's __init__ method using super. That would look like this:
def __init__(self, *args, **kwargs):
    super(EdgeAclControl, self).__init__(*args, **kwargs)
Note that if, as in this example, you're not doing anything other than calling the parent __init__ method in the derived __init__ method, you could get the same effect by just deleting the derived version entirely!
It's likely that your real code does something in EdgeAclControl.__init__, so you may need to keep it in some form. Note that it can take arguments normally in addition to the *args and **kwargs. Just remember to pass on the extra arguments, if necessary, when calling the base class.
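With that change, the calling code from the question would instantiate the subclass instead, roughly like this:

from lib import security

gza = security.EdgeAclControl(_aclname='pytest', _apikey='mykey',
                              _serviceid='stuffhere', _vclversion=5)
gza.somemethodhere()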
May I ask why you have to instantiate an ApiCommon object? I don't see any point in doing so.
If you insist on doing that, you have to add methods to the superclass, and the subclass may then override these methods. But you still couldn't access methods of EdgeAclControl from an ApiCommon object.
I was reading 'Dive Into Python' and in the chapter on classes it gives this example:
class FileInfo(UserDict):
    "store file metadata"
    def __init__(self, filename=None):
        UserDict.__init__(self)
        self["name"] = filename
The author then says that if you want to override the __init__ method, you must explicitly call the parent __init__ with the correct parameters.
What if that FileInfo class had more than one ancestor class?
Do I have to explicitly call all of the ancestor classes' __init__ methods?
Also, do I have to do this to any other method I want to override?
The book is a bit dated with respect to subclass-superclass calling. It's also a little dated with respect to subclassing built-in classes.
It looks like this nowadays:
class FileInfo(dict):
    """store file metadata"""
    def __init__(self, filename=None):
        super(FileInfo, self).__init__()
        self["name"] = filename
Note the following:
We can directly subclass built-in classes, like dict, list, tuple, etc.
The super function handles tracking down this class's superclasses and calling functions in them appropriately.
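Used like the book's example, the subclass then behaves like a dict (the filename is just illustrative):

info = FileInfo("/music/_singles/kairo.mp3")
print(info["name"])        # /music/_singles/kairo.mp3
info["genre"] = "techno"
print(info)                # {'name': '/music/_singles/kairo.mp3', 'genre': 'techno'}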
In each class that you inherit from, you can run a loop over each class that needs to be initialized upon initialization of the child class... an example that can be copied might be better understood...
class Female_Grandparent:
    def __init__(self):
        self.grandma_name = 'Grandma'

class Male_Grandparent:
    def __init__(self):
        self.grandpa_name = 'Grandpa'

class Parent(Female_Grandparent, Male_Grandparent):
    def __init__(self):
        Female_Grandparent.__init__(self)
        Male_Grandparent.__init__(self)
        self.parent_name = 'Parent Class'

class Child(Parent):
    def __init__(self):
        Parent.__init__(self)
        #--------------------------------------------------------------------#
        for cls in Parent.__bases__:  # This block grabs the base classes of the
            cls.__init__(self)        # child's parent class (named 'Parent' here)
                                      # and iterates through them, initializing
                                      # each one. The result is that each ancestor
                                      # of each child is automatically handled
                                      # upon initialization of the dependent
                                      # class. WOOT WOOT! :D
        #--------------------------------------------------------------------#

g = Female_Grandparent()
print(g.grandma_name)

p = Parent()
print(p.grandma_name)

child = Child()
print(child.grandma_name)
You don't really have to call the __init__ methods of the base class(es), but you usually want to do it because the base classes will do some important initialization there that is needed for the rest of the class's methods to work.
For other methods it depends on your intentions. If you just want to add something to the base class's behavior, you will want to call the base class's method in addition to your own code. If you want to fundamentally change the behavior, you might not call the base class's method and instead implement all the functionality directly in the derived class.
If the FileInfo class has more than one ancestor class then you should definitely call all of their __init__() functions. You should also do the same for the __del__() function, which is a destructor.
Yes, you must call __init__ for each parent class. The same goes for functions, if you are overriding a function that exists in both parents.
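For illustration (the class names here are made up), calling each ancestor's __init__ explicitly looks like this:

class Audio:
    def __init__(self):
        self.bitrate = 128

class Tagged:
    def __init__(self):
        self.tags = {}

class MP3FileInfo(Audio, Tagged):
    def __init__(self):
        # call every ancestor's __init__ explicitly
        Audio.__init__(self)
        Tagged.__init__(self)

f = MP3FileInfo()
print(f.bitrate, f.tags)   # 128 {}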