Object oriented programming - Overriding a base class - python

What is method overriding? What is the exact need or use of method overriding? An example with a python code will be more useful.

class Car(object):
    def shifting(self):
        return "manual"

class AutoCar(Car):
    def shifting(self):
        return "automatic"

autos = [Car(), AutoCar()]
for auto in autos:
    print(auto.shifting())
In this basic example, AutoCar has overridden the shifting method of the base class.
If you receive a list containing a mix of unknown Cars (manual and automatic), you can call the shifting method on each one to find out which is which.
Of course, since Python is dynamically typed, the list could also contain instances of other classes that happen to have a shifting method, but that's beside the point.
Output:
manual
automatic
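To make the duck-typing point concrete: any class with a shifting method works in the same loop, no inheritance required (the Bike class here is a made-up example, not from the question):

```python
class Car(object):
    def shifting(self):
        return "manual"

class Bike:  # unrelated to Car, but shares the method name
    def shifting(self):
        return "chain-driven"

vehicles = [Car(), Bike()]
results = [v.shifting() for v in vehicles]
print(results)  # ['manual', 'chain-driven']
```

This is why the overriding example works at all: the loop only cares that each object responds to shifting, not where the method came from.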

Related

How to dynamically call classes that follow the SOLID principles

I have a module where I try to follow the SOLID principles to create and generate data, and I think the following is based on the Liskov Substitution Principle:
from abc import ABC

class BaseLoader(ABC):
    def __init__(self, dataset_name='mnist'):
        self.dataset_name = dataset_name

class MNISTLoader(BaseLoader):
    def load(self):
        # Logic for loading the data
        pass

class OCTMNISTLoader(BaseLoader):
    def download(self):
        # Logic for downloading the data
        pass
Now I want to create an instance based on a parsed argument or a loaded config file. I wonder if the following is best practice, or if better ways exist to create an instance dynamically:
possible_instances = {'mnist': MNISTLoader, 'octmnist': OCTMNISTLoader}
chosen_dataset = 'mnist'
instance = possible_instances[chosen_dataset](dataset_name=chosen_dataset)
EDIT #1:
We also thought about using a function to instantiate the classes dynamically. This function is then placed inside the module that contains the classes:
def get_loader(loader_name: str) -> BaseLoader:
    loaders = {
        'mnist': MNISTLoader,
        'octmnist': OCTMNISTLoader
    }
    try:
        return loaders[loader_name]()
    except KeyError as err:
        raise CustomError("good error message") from err
I am still not sure which is the most Pythonic way to solve this.
I wouldn't say this has much to do with the LSP, since both classes inherit only from an abstract base class that never gets instantiated. You are simply sharing a dataset_name member of the base class to reduce code duplication. And you can forget about the default value in the argument dataset_name='mnist'; it serves no purpose the way you are using it.
I don't know much about Python, but to instantiate a class from a string you would usually want a factory class, where you use whatever method you come up with to map a string to an instance of the matching class, perhaps a simple if/elif. The factory method could contain something like this:
if loader_name == "mnist":
    return MNISTLoader(loader_name)
elif loader_name == "octmnist":
    return OCTMNISTLoader(loader_name)
# but you probably don't need to pass the loader_name argument
While the above is not an elegant solution, you now have a reusable class with a method that is the single source of truth for how strings map to types.
Also, you may remove the dataset_name init argument and member unless you have a reason for objects to hold that information. I guess you tried to use the base init method like a factory, but that's not the way to go.
One problem your code does have is the pair of methods load and download: how will you know which of the two to call if you fetch instances dynamically? Again, if you named them differently because you thought this had something to do with the LSP, it doesn't. You are losing the advantage of polymorphism by having two methods with different names. Just have both classes implement the same abstract method (since you're using the ABC mechanism anyway), say load, and be done with it. Then you can call load() on whatever the object's type is.
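Putting those suggestions together, a minimal sketch of the registry-plus-shared-abstract-method approach might look like this (the loader bodies and the error message are placeholders, not real loading logic):

```python
from abc import ABC, abstractmethod

class BaseLoader(ABC):
    @abstractmethod
    def load(self):
        """Every concrete loader must implement load()."""

class MNISTLoader(BaseLoader):
    def load(self):
        return "mnist data"      # placeholder for real loading logic

class OCTMNISTLoader(BaseLoader):
    def load(self):
        return "octmnist data"   # placeholder for real loading logic

# single source of truth for the string -> class mapping
_LOADERS = {'mnist': MNISTLoader, 'octmnist': OCTMNISTLoader}

def get_loader(loader_name: str) -> BaseLoader:
    try:
        return _LOADERS[loader_name]()   # instantiate the mapped class
    except KeyError as err:
        raise ValueError(f"unknown loader: {loader_name!r}") from err

loader = get_loader('mnist')
print(loader.load())   # uniform call, no isinstance checks needed
```

Because both subclasses implement the same abstract load method, the caller never needs to know which concrete class came out of the registry.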

difference between DataType and Class in python

What's the difference between a data type and a class?
Does a class allow you to create a new data type that we define on our own, like str, int, list, ...?
If not, I want to know what the built-in classes in Python are.
And finally, I want a simple definition of an object, with some examples.
A type denotes a certain interface, i.e. a set of signatures,
whereas a class tells us how an object is implemented (like a
blueprint). A class can have many types if it implements all their
interfaces, and different classes can have the same type if they share a
common interface. More specifically -
A type is a class that can be used as a template for additional classes by way of inheritance. Example:
class Myclass(Test):
    pass
A class is a Python data structure that can be used as a template for instances of that class by calling it. Example:
my_instance = Myclass()
You can read a bit more of the theory here.
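In practice the distinction is thin in Python: classes are types. Built-ins like str, int, and list are themselves classes, and type() reports the class of any object. A small illustration (the Point class is made up for the example):

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

p = Point(1, 2)          # p is an object: an instance of the Point class

print(type(p))           # <class '__main__.Point'>
print(type("hi"))        # <class 'str'> -- str is a built-in class
print(isinstance(p, Point))  # True
print(isinstance(3, int))    # True
```

So defining a class really is how you introduce a new type of your own, alongside str, int, list, and the other built-ins.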

Python design pattern for magic methods and pandas inheritance

So I'm basically asking this question for some advice.
I have a few classes which do some pandas operations and return a dataframe. But these dataframes need addition and subtraction with some filter options applied. I planned to write a class that overrides the __add__ and __sub__ methods, so that these dataframes are added or subtracted by my code, which implements those filters. Below is a basic structure:
import pandas as pd
from arith_operation import ArithOperation  # assuming the class lives in arith_operation.py

class A:
    def dataA(self, filenameA):
        dfa = pd.read_excel(filenameA)
        return ArithOperation(dfa)

class B:
    def dataB(self, filenameB):
        dfb = pd.read_excel(filenameB)
        return ArithOperation(dfb)
dfa and dfb here are pandas dataframes.
class ArithOperation:
    def __init__(self, df):
        self.df = df

    def __add__(self, other):
        # here the filtering and manual addition is done
        return ArithOperation(sumdf)

    def __sub__(self, other):
        # here the filtering and manual subtraction is done
        return ArithOperation(deltadf)
Basically I do the calculation as below (filenames are illustrative):
dfa = A().dataA('fileA.xlsx')
dfb = B().dataB('fileB.xlsx')
sumdf = dfa + dfb
deltadf = dfa - dfb
But how do I make sumdf and deltadf have the default dataframe functions too? I know I should make ArithOperation inherit from the dataframe class, but I am confused and a bit uncomfortable about instantiating ArithOperation() in many places.
Is there a better design pattern for this problem?
Which class should ArithOperation inherit from so that ArithOperation objects also have all the pandas dataframe functions?
It looks to me like you want to customize the behaviour of an existing type (dataframes in this case, but it could be any type). Usually we want to alter some of the behaviour, or we want to extend it, or both. (When I say behaviour, think methods.)
One option is a has-a approach: wrapping an object, by creating a new class whose instances hold a reference to the original object. That way you can create several new methods, different or similar, that do new things, possibly reusing some of the existing ones by invoking the original methods through the stored reference. This way you adapt the original class's interface to a different one. This is known as the wrapper (or adapter) pattern.
That is what you have made. But then you face a problem: how do you expose all of the possible methods of the original class? You would have to rewrite every method (not practical) just to delegate to the original class, or find a way to delegate them all except the few you override. I will not cover this last possibility, because you have inheritance at your disposal and that makes things like this quite straightforward.
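For the record, the blanket delegation mentioned above is usually done with the __getattr__ hook, which Python invokes only when normal attribute lookup fails (the Wrapper class below is a generic sketch, not specific to dataframes):

```python
class Wrapper:
    def __init__(self, wrapped):
        self._wrapped = wrapped

    def shout(self):
        # one genuinely new method that uses the wrapped object
        return str(self._wrapped).upper()

    def __getattr__(self, name):
        # called only when normal lookup fails:
        # forward every unknown attribute to the wrapped object
        return getattr(self._wrapped, name)

w = Wrapper([1, 2])
w.append(3)          # not defined on Wrapper -> delegated to the list
print(w.shout())     # '[1, 2, 3]'
```

Note that dunder methods like __add__ bypass __getattr__, which is one more reason inheritance is the simpler route for this particular problem.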
Just inherit from the original class and you'll be able to create objects with the same behaviour as the parent class. If you need new data members, override __init__ and add them there, but don't forget to invoke the parent class's __init__, otherwise the object won't be initialized properly; that's where you use super().__init__(), as described below. If you want to add new methods, just add the usual defs to your child class. If you want to extend existing methods, do as described for __init__. If you want to completely override a method, just write your version with a def with the same name as the original method; it will totally replace the original.
So in your case you want to inherit from the dataframe class. You'll have to decide whether you need a custom __init__ or not; if you do, define one, but do not forget to call the parent's original inside it. Then define your custom methods, say a new __add__ and __sub__, that either replace or extend the original ones (with the same technique).
Be careful not to define methods that you think are new but that actually exist in the original class, because they will be overridden. This is a small inconvenience of inheriting, especially if the original class has an interface with lots of methods.
Use super() to extend a parent class's behaviour:
class A:
    pass  # has some original __init__

class B(A):  # B inherits A
    def __init__(self, *args, **kwargs):
        # Preserve parent initialization by passing along the received
        # arguments. Without this call the parent's __init__ is simply
        # overridden and the parent part of the object is never initialized.
        super().__init__(*args, **kwargs)
        # add your custom code
        pass
The thing with super() is that it also solves problems that arise with multiple inheritance (inheriting from several classes) when determining which parent's method should be called if several define a method with the same name; it is recommended over calling the parent's method directly with SomeParentClass.method() inside the child class.
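A tiny runnable version of the skeleton above, extending __init__ with new state:

```python
class A:
    def __init__(self, x):
        self.x = x

class B(A):
    def __init__(self, x, y):
        super().__init__(x)  # parent sets self.x exactly as before
        self.y = y           # child adds its own data member

b = B(1, 2)
print(b.x, b.y)  # 1 2 -- both parent and child state are initialized
```

The same pattern applies to any method you want to extend rather than replace: do your extra work, then (or first) call super().method(...).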
Also, if you subclass a type because you need it, you shouldn't be afraid of using it "everywhere", as you said. Make sure you customize it for a good reason (so consider whether doing this to dataframes is appropriate, or whether there is a simpler way of achieving the same objective; I can't advise you there, as I have no experience with pandas), and then use it instead of the original class.

Placing custom class object in a list

I'm fairly new to object oriented programming so some of the abstraction ideas are a little blurry to me. I'm writing an interpreter for an old game language. Part of this has made me need to implement custom types from said language and place them on a stack to be manipulated as needed.
Now, I can put a string on a list. I can put a number on a list, and I've even found I can put symbols on a list. But I'm a bit fuzzy on how I would put a custom object instance on a list when I can't just drop it into a variable (since, after all, I don't know how many there will be and can't go about defining them by hand while the code is running :)
I've made a class for one of the simplest data types: a DBREF. The DBREF just contains a database reference number. I can't just use an integer, string, dictionary, etc., because the language has type-checking mechanisms I have to implement, and that would confuse matters, since those types are already used elsewhere for their closest analogues.
Here is my code and my reasoning behind it:
class dbref:
    dbnumber=0
    def __init__(self, number):
        global number
        dbnumber=number
    def getdbref:
        global number
        return number
I create a class named dbref. All it does (for now) is take a number and store it in a variable. My hope is that if I were to do:
examplelist=[ dbref(5) ]
That the dbref object would be on the stack. Is that possible? Further, will I be able to do:
if typeof(examplelist[0]) is dbref:
    print "It's a DBREF."
else:
    print "Nope."
...or am I misunderstanding how Python classes work? Also, is my class definition wonky in any way?
If you used...
class dbref:
    dbnumber=0
that would share the same number among all instances of the class, because dbnumber would be a class attribute, rather than an instance attribute. Try this instead:
class dbref(object):
    def __init__(self, number):
        self.dbnumber = number

    def getdbref(self):
        return self.dbnumber
self is a reference to the object instance itself that's automatically passed by Python when you call one of the instance's methods.
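With that fix, instances go into a list like any other value, and the type check from the question is spelled isinstance in Python (there is no typeof):

```python
class dbref(object):
    def __init__(self, number):
        self.dbnumber = number

    def getdbref(self):
        return self.dbnumber

# custom objects sit in a list alongside any other values
examplelist = [dbref(5), "a string", 42]

for item in examplelist:
    if isinstance(item, dbref):
        print("It's a DBREF:", item.getdbref())
    else:
        print("Nope.")
```

Each dbref(5) call creates a fresh instance with its own dbnumber, so you can create as many as the interpreted program needs at runtime without naming them individually.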

Is there a standard function to iterate over base classes?

I would like to be able to iterate over all the base classes, both direct and indirect, of a given class, including the class itself. This is useful in the case where you have a metaclass that examines an internal Options class of all its bases.
To do this, I wrote the following:
def bases(cls):
    yield cls
    for direct_base in cls.__bases__:
        for base in bases(direct_base):
            yield base
Is there a standard function to do this for me?
There is a method that can return them all, in Method Resolution Order (MRO): inspect.getmro. See here:
http://docs.python.org/library/inspect.html#inspect.getmro
It returns them as a tuple, which you can then iterate over in a single loop yourself:
import inspect

for base_class in inspect.getmro(cls):
    # do something
    pass
This has the side benefit of only yielding each base class once, even if you have diamond-patterned inheritance.
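For example, with diamond-shaped inheritance:

```python
import inspect

class A: pass
class B(A): pass
class C(A): pass
class D(B, C): pass  # diamond: A is reachable via both B and C

print(inspect.getmro(D))
# (D, B, C, A, object) -- A and object each appear exactly once
```

The tuple includes the class itself first, matching what the question's hand-written generator yields, but deduplicated and in MRO order.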
Amber has the right answer for the real world, but I'll show one correct way to do this. Your solution will include some classes twice if two base classes themselves inherit from the same base class.
def bases(cls):
    classes = [cls]
    i = 0
    while 1:
        try:
            cls = classes[i]
        except IndexError:
            return classes
        i += 1
        classes[i:i] = [base for base in cls.__bases__ if base not in classes]
The only slightly tricky part is the slice assignment. It's necessary to perform this sort of depth-first search without using recursion. All it does is take the base classes of the class currently being examined and insert them immediately after it, so that the first base class is the next class examined. A very readable solution (with its own ugliness) is available in the implementation of inspect.getmro in the standard library.
I don't know if it's exactly what you're looking for, but have a look at someclass.__mro__, MRO being the Method Resolution Order:
http://docs.python.org/library/stdtypes.html?highlight=mro#class.__mro__
