I'm using the DroneKit API in Python for controlling a drone using a companion computer. I'm trying to create a class, Vehicle, which inherits from the Vehicle class in DroneKit. The purpose of this class is for me to override some methods present in DroneKit that don't work with PX4 as well as adding a few methods of my own, whilst still having access to all of the methods available by default.
The issue is that you don't create a Vehicle object directly in DroneKit – you call the connect() function, which returns a Vehicle object.
My question is, how do I create an instance of my class?
The accepted method seems to be to call the parent __init__(), like so:
class Vehicle(dronekit_Vehicle):
    def __init__(self, stuff):
        dronekit_Vehicle.__init__(self, stuff)
But like I said, you don't create a Vehicle object directly in DroneKit (e.g. vehicle = Vehicle(stuff)); instead you call vehicle = connect(stuff), which eventually returns a Vehicle object but also does a bunch of other stuff.
The only way I can think of is
class Vehicle(dronekit_Vehicle):
    def __init__(self, stuff):
        self.vehicle = connect(stuff)
And then having to use self.vehicle.function() to access the default DroneKit commands and attributes, which is a huge pain.
How do I make this work?
The way objects are defined has nothing to do with connect. connect is merely a convenience function that wraps some logic around the object creation:
def connect(...):
    handler = MAVConnection(...)
    return Vehicle(handler)
with Vehicle.__init__() being defined as
def __init__(self, handler):
    super(Vehicle, self).__init__()
    self._handler = handler
    ...
So as long as you pass on the handler in your constructor:
class MyVehicle(dronekit.Vehicle):
    def __init__(self, handler):
        super(MyVehicle, self).__init__(handler)
Your class will work with connect():
connect(..., vehicle_class=MyVehicle)
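Putting this together, a minimal sketch might look like the following (the connection string and wait_ready value are just example arguments, not something mandated by DroneKit):

    import dronekit

    class MyVehicle(dronekit.Vehicle):
        def __init__(self, handler):
            super(MyVehicle, self).__init__(handler)
            # PX4-specific overrides and extra helper methods go here

    # example connection string for a local UDP endpoint (e.g. SITL)
    vehicle = dronekit.connect('udp:127.0.0.1:14550',
                               wait_ready=True,
                               vehicle_class=MyVehicle)

Every attribute and method of dronekit.Vehicle is then available directly on vehicle, with your overrides taking precedence.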
I have a Parent and a Child class; both should execute their own fct in __init__, but Child has to execute the Parent fct first:
class Parent(object):
    def __init__(self):
        self.fct()

    def fct(self):
        # do some basic stuff
        ...

class Child(Parent):
    def __init__(self):
        super().__init__()
        self.fct()

    def fct(self):
        # add other stuff
        ...
Problem is that super().__init__() calls the Child fct and not the Parent one as I would like. Of course I could rename the Child function to fct2, but I was wondering if I can do what I want without changing names (because fct and fct2 would do the same thing conceptually speaking, they just apply to different things). It would be nice if I could call super().__init__() as if self were a Parent object.
The idea of subclassing is this: if you ever need to use a method from the parent class, just do not create one with the same name in the child class.
Otherwise, in a hierarchy with complicated classes and mixins where you really need the methods to have the same name, there is the name mangling mechanism, triggered by Python when a method or attribute name is prefixed with two underscores (__):
class Parent(object):
    def __init__(self):
        self.__fct()

    def __fct(self):
        # do some basic stuff
        ...

class Child(Parent):
    def __init__(self):
        super().__init__()
        self.__fct()

    def __fct(self):
        # add other stuff
        ...
Using the __ prefix makes Python rename the method, both where it is declared and where it is used, when the class is created (that is, at the time the class statement with its block is executed) - so the two methods behave as if they were named differently, each one only accessible, in an ordinary way, from code in its own class.
Some documentation, mainly older docs, will sometimes refer to this as the mechanism in Python to create "private methods". It is not the same thing, although it can serve the same purpose in some use cases (like this one). The __fct method above will be renamed to Parent._Parent__fct and Child._Child__fct respectively when the code is executed.
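For example, instantiating Child with the classes above runs each version exactly once, and the mangled names are visible on the classes:

    c = Child()                  # Parent.__init__ runs Parent's __fct, then Child.__init__ runs Child's __fct
    print(Parent._Parent__fct)   # Parent's version, under its mangled name
    print(Child._Child__fct)     # Child's version, under its mangled name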
Second way, without name mangling:
Without resorting to this name mangling mechanism, it is possible to retrieve attributes from the class where a piece of code is declared by using the __class__ special name (not self.__class__, just __class__) - it is part of the same mechanism Python uses to make argumentless super() work:
class Parent(object):
    def __init__(self):
        __class__.fct(self)  # <- retrieved from the current class (Parent)

    def fct(self):
        # do some basic stuff
        ...

class Child(Parent):
    def __init__(self):
        super().__init__()
        __class__.fct(self)

    def fct(self):
        # add other stuff
        ...
This will also work - just note that, as the methods are retrieved from the class object rather than from an instance, the instance has to be explicitly passed as an argument when calling them.
The name __class__ is inserted automatically into any method that uses it, and always refers to the class whose body it appears in - even though the class itself is only created "in the future", once the whole class body has been processed and the class statement itself is resolved.
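For example, with the classes above, each __init__ calls the fct defined in its own class, while ordinary attribute lookup on the instance still resolves to the override in Child:

    c = Child()   # Parent.__init__ calls Parent.fct, then Child.__init__ calls Child.fct
    c.fct()       # normal lookup still finds Child.fct, since it overrides Parent.fct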
I am trying to create a client class that is able to be instantiated with connection information and other attributes about how the client is connecting and interacting with a service. The client class would have inner classes that represent objects in the service. These objects could be instantiated in a couple of different ways:
The outer, client class has a factory method that creates the inner class, like client.make_someobject() - this passes an instance of itself to the new object, so the object knows about and can use the connection information without the caller explicitly passing the connection in.
An existing object in the service can be pulled in by writing client.SomeObject(some_id)
My question is mostly related to the second scenario. When creating an instance of an inner class directly, without a factory method that can just pass in self, how could I ensure that the new instance of the inner class knows about the attributes of the outer, client class?
Illustrative example:
class Client():
    def __init__(self, client_attr):
        self.client_attr = client_attr

    def make_serviceobject(self):
        return ServiceObject._make_serviceobject(self)

    class ServiceObject():
        def __init__(self, id, client=None):
            self.id = id
            if client:
                self.client = client
            # ...

        #classmethod
        def _make_serviceobject(cls, client):
            id = 'some_id'
            return cls(id, client=client)
my_client = Client(some_attr)
# now, how can this new ServiceObject know about the my_client attributes and methods?
my_existing_resource = my_client.ServiceObject(some_id)
# I am trying to avoid this:
my_existing_resource = my_client.ServiceObject(some_id, client=my_client)
You're pretty close, but there are a few issues here:
Class methods (where you expect the class itself to be passed as the first parameter, conventionally named cls, rather than an instance, conventionally named self) need to be decorated with @classmethod. However, there isn't really a reason to set up a hidden class method within the inner class; since it's logic that only gets used by the outer class, and is accessible (via the Client.make_serviceobject interface) from the outer class, it logically belongs in the outer class. (There also isn't a reason to use classmethod anyway - as opposed to staticmethod - because there is no hope of this working polymorphically; the inner class is specific to this outer class.)
my_client.ServiceObject (that is, the class name of the inner class, looked up via an outer class instance) just gets you that class itself, rather than anything associated with or bound to the outer class instance.
It makes no sense to offer a default None for the inner class' client (i.e., containing instance), because a) there will never be a None value (we only intend to create the instance from the outer class) and b) the outer class instance is presumably necessary for the inner class' functionality (otherwise, why do any of this setup work at all rather than just having two separate classes?)
You apparently want to supply the ID from the calling code, so the internal code can't just make one up.
To fix these problems, we simply:
Make the outer class' interface do the work needed to create an instance of the inner class, and have it accept a parameter for the ID.
Have it do so directly, via the constructor.
Have calling code use that interface.
I would also mark the inner class' name to indicate that it isn't intended to be dealt with directly.
It looks like:
class Client():
    def __init__(self, client_attr):
        self.client_attr = client_attr

    def make_serviceobject(self, id):
        # any logic necessary to compute the ID goes here.
        return _ServiceObject(id, self)

class _ServiceObject():
    def __init__(self, id, client):
        self.id = id
        self.client = client
        # ...
my_client = Client(some_attr)
my_existing_resource = my_client.make_serviceobject('some_id')
# assert my_existing_resource.client == my_client
In the example below, I want to know which of these two approaches I should use for inheritance. I think both are valid, so why do I sometimes have to use super() if the other way works as well?
class User:
    def __init__(self):
        self._user = "User A"

class UserA(User):
    _user = "User B"

    def __init__(self):
        super().__init__()

class UserB(User):
    pass
You are correct, both are valid. The difference is:
UserA: you are overriding the __init__ method of the ancestor. This is practical if you want to add something during the initialization process. However, you still want to initialize the ancestor, and that can be done via super().__init__(), despite having overridden the __init__ method.
UserB: you are fully reusing the __init__ of the ancestor you are inheriting from (by not overriding the __init__ method). This can be used if nothing extra needs to be done during initialization.
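For instance, with the classes above, both subclasses end up with the same instance attribute, because User.__init__ runs in both cases:

    print(UserA()._user)  # User A - set by User.__init__ via super().__init__()
    print(UserB()._user)  # User A - set by the inherited User.__init__
    print(UserA._user)    # User B - the class attribute, untouched by __init__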
The super() builtin returns a proxy object (temporary object of the superclass) that allows us to access methods of the base class. For example:
class Mammal(object):
    def __init__(self, mammalName):
        print(mammalName, 'is a warm-blooded animal.')

class Dog(Mammal):
    def __init__(self):
        print('Dog has four legs.')
        super().__init__('Dog')
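Instantiating Dog shows the order in which the two initializers run:

    d = Dog()
    # Dog has four legs.
    # Dog is a warm-blooded animal.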
self represents the instance of the class. Through the self parameter we can access the attributes and methods of the class in Python (note that self is not actually a keyword, just the conventional name for the first parameter of instance methods).
I am working with the Python canmatrix library (well, presently my Python3 fork) which provides a set of classes for an in-memory description of CAN network messages as well as scripts for importing and exporting to and from on-disk representations (various standard CAN description file formats).
I am writing a PyQt application using the canmatrix library and would like to add some minor additional functionality to the bottom-level Signal class. Note that a CanMatrix organizes its member Frames, which in turn organize their member Signals. The whole structure is created by an import script which reads a file. I would like to retain the import script and sub-member finder functions of each layer, but add an extra 'value' member to the Signal class as well as getters/setters that can trigger Qt signals (not related to the canmatrix Signal objects).
It seems that standard inheritance approaches would require me to subclass every class in the library and override every function which creates the library Signal to use mine instead. Ditto for the import functions. This just seems horribly excessive to add non-intrusive functionality to a library.
I have tried inheriting and replacing the library class with my inherited one (with and without the pass-through constructor) but the import still creates library classes, not mine. I forget if I copied this from this other answer or not, but it's the same structure as referenced there.
class Signal(QObject, canmatrix.Signal):
    _my_signal = pyqtSignal(int)

    def __init__(self, *args, **kwargs):
        canmatrix.Signal.__init__(self, *args, **kwargs)
        # TODO: what about QObject
        print('boo')

    def connect(self, target):
        self._my_signal.connect(target)

    def set_value(self, value):
        self._my_value = value
        self._my_signal.emit(value)

canmatrix.Signal = Signal
print('overwritten')
Is there a direct error in my attempt here?
Am I doing this all wrong and need to go find some (other) design pattern?
My next attempt involved shadowing each instance of the library class. For any instance of the library class that I want to add the functionality to I must construct one of my objects which will associate itself with the library-class object. Then, with an extra layer, I can get from either object to the other.
class Signal(QObject):
    _my_signal = pyqtSignal(int)

    def __init__(self, signal):
        signal.signal = self
        self.signal = signal
        # TODO: what about QObject parameters
        QObject.__init__(self)
        self.value = None

    def connect(self, target):
        self._my_signal.connect(target)

    def set_value(self, value):
        self.value = value
        self._my_signal.emit(value)
The extra layer is annoying (library_signal.signal.set_value() rather than library_signal.set_value()) and the mutual references seem like they may keep both objects from ever getting cleaned up.
This does run and function, but I suspect there's still a better way.
I have a Python fuse project based on the Xmp example in the fuse documentation. I have included a small piece of the code to show how this works. For some reason get_file does get called and the class gets created, but instead of fuse calling .read() on the instance returned by get_file (file_class), fuse keeps calling Dstorage.read(), which defeats the purpose of moving the read function out of that class.
class Dstorage(Fuse, Distributor):
    def get_file(self, server, path, flags, *mode):
        pass
        # This does some work and passes back an instance of
        # a class very similar to XmpFile

    def main(self, *a, **kw):
        self.file_class = self.get_file
        return Fuse.main(self, *a, **kw)
I have my code hosted on launchpad, you can download it with this command.
bzr co https://code.launchpad.net/~asa-ayers/+junk/dstorage
bzr branch lp:~asa-ayers/dstorage/trunk
Solution:
I used a proxy class that subclasses the one I needed; in its constructor I get the instance I actually need and overwrite all of the proxy's methods so that they simply call the corresponding methods on that instance.
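Roughly, the idea looks like this (a sketch only: XmpFile is the example class mentioned above, and pick_backend_file is a hypothetical stand-in for whatever logic selects the real file object):

    class FileProxy(XmpFile):
        def __init__(self, path, flags, *mode):
            # hypothetical helper returning the instance that should
            # really handle this file
            target = pick_backend_file(path, flags, *mode)
            # rebind the proxy's methods to the target's bound methods so
            # every call on the proxy is forwarded to the real object
            for name in ('read', 'write', 'flush', 'release'):
                if hasattr(target, name):
                    setattr(self, name, getattr(target, name))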
Looking at the code of the Fuse class (which is a maze of twisty little passages creating method proxies), I see this bit (which is a closure used to create a setter inside Fuse.MethodProxy._add_class_type, line 865):
def setter(self, xcls):
    setattr(self, type + '_class', xcls)
    for m in inits:
        self.mdic[m] = xcls
    for m in proxied:
        if hasattr(xcls, m):
            self.mdic[m] = self.proxyclass(m)
When you do self.file_class = self.get_file, this gets called with self.get_file, which is a bound method. The loop over proxied attributes is expecting to be able to get the attributes off the class you set, to put them into its mdic proxy dictionary after wrapping them, but they aren't there, because it's a bound method, rather than a class. Since it can't find them, it reverts to calling them on Dstorage.
So, long story short, you can't use a callable that returns an instance (kind of a pseudo-class) instead of a class here, because Fuse is introspecting the object that you set to find the methods it should call.
You need to assign a class to file_class - if you need to refer back to the parent instance, you can use the nested class trick they show in the docs.
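A minimal sketch of that trick, assuming the Dstorage class from the question (pick_server is a hypothetical helper standing in for however a server gets chosen for a path):

    class Dstorage(Fuse, Distributor):
        def main(self, *a, **kw):
            dstorage = self  # captured by the nested class below

            class WrappedFile(object):
                def __init__(self, path, flags, *mode):
                    server = dstorage.pick_server(path)  # hypothetical
                    # delegate to an instance produced by the parent object
                    self._impl = dstorage.get_file(server, path, flags, *mode)

                def read(self, length, offset):
                    return self._impl.read(length, offset)

            # assign a class, not a bound method, so Fuse can introspect it
            self.file_class = WrappedFile
            return Fuse.main(self, *a, **kw)

Other file operations (write, release, and so on) would be forwarded by WrappedFile in the same way as read.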