Constructor Overloading in Python

I have been reading about Python and came across various ways to somehow perform overloading in Python (most of them suggested the use of @classmethod). But I am trying to do something like the code shown below: I have kept all the parameters required in the __init__ method itself. What possible problems may arise from my choice of overloading?
class Vehicle(object):
    def __init__(self, wheels=None, engine=None, model=None):
        print("A vehicle is created")
        self.w = wheels
        self.e = engine
        self.m = model
Now I can create any number of Vehicle objects with different parameters each time. I can say something like:
v = Vehicle(wheels=2, engine='Petrol')
v2 = Vehicle(4, 'Diesel', 'Honda')
or even
v3 = Vehicle()
And later I can say something like v3.w = 10  # for a truck, and it still works.
So my question is: is this a correct way of overloading, apart from @classmethod? What difficulties might I run into later down the path if I use this kind of code?

I just went through this same problem, and looking at the documentation for Python 3.6, @classmethod is a decorator that is actually shorthand for some deeper programming concepts. For anyone like me who's just trying to unpack what Python is doing here: coming from C# or Java, I would explain @classmethod as a function that creates a delegate typed to a class, points the delegate at that class's constructor/method, returns that constructor/method, and allows the returned constructor/method to be used in whatever you define below @classmethod. So essentially, @classmethod is really a syntactical shortcut that does a lot of things.
What OP is doing here is using this syntactic shortcut to create a "factory", which is a very common way of creating instances in many different languages.
I do think it's important, however, to realize that unlike other simple things you might do in Python, there is a lot going on under the hood here. While it's not wrong, it might be more efficient to create a simple factory, depending on what you want to get out of it.
If you don't have a background in any other languages, I could try to simplify the answer by saying that @classmethod takes the function you define below it and hands it back wrapped so that it receives the class as its first argument.
Here's the documentation on Python 3.6. Scroll down to "decorators" to see what it says.
https://docs.python.org/3/glossary.html#term-decorator
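To make that concrete, here is a minimal sketch of the @classmethod factory style the question alludes to (the from_string constructor is invented for illustration, not part of OP's code):

class Vehicle(object):
    def __init__(self, wheels=None, engine=None, model=None):
        self.w = wheels
        self.e = engine
        self.m = model

    @classmethod
    def from_string(cls, spec):
        # Alternate constructor: build a Vehicle from "wheels,engine,model".
        wheels, engine, model = spec.split(",")
        return cls(int(wheels), engine, model)

v = Vehicle.from_string("4,Diesel,Honda")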


Selecting executed method of class at runtime in python?

This question is very generic, but I don't think it is opinion-based. It is about software design, and the example prototype is in Python:
I am writing a program whose goal is to simulate some behaviour (the details don't matter). The data on which the simulation works is fixed, but I want to change the simulated behaviour at every startup. The simulation behaviour can't be changed at runtime.
Example:
Simulation behaviour is defined like:
usedMethod = static
The program then looks something like this:
while True:
    result = static(obj)  # static is the method specified in the behaviour
    # do something with result
The question is: what is the best way to deal with such exchangeable function definitions? Another run of the simulation could then look like this
while True:
    result = dynamic(obj)
if dynamic is specified as usedMethod. The first thing that came to my mind was an if-else block where I check which method is to be used and then execute it. This solution would not be very good, because every time I add new behaviour I have to change the if-else block, and the if-else block itself might cost performance, which is important too. The simulations should be fast.
So a solution I could think of was using a function pointer (the output and input of all usedMethods should be well defined, so it should not be a problem). Then I initialize the function pointer at startup, where the used method is defined.
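In Python a "function pointer" is just a name bound to a function object, so a minimal sketch of what I have in mind (the behaviour names are illustrative) would be:

def static_behaviour(obj):
    return obj  # placeholder implementation

def dynamic_behaviour(obj):
    return obj  # placeholder implementation

# Chosen once at startup from the behaviour definition:
behaviours = {"static": static_behaviour, "dynamic": dynamic_behaviour}
used_method = behaviours["static"]

result = used_method("some data")  # the hot loop pays only a plain call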
The problem I currently have is that the used method is not a function per se, but a method of a class which depends heavily on the internal members of that class, so the code looks more like this:
balance = BalancerClass()
while True:
    result = balance.static(obj)
    ...
    balance.doSomething(input)
So my question is, what is a good solution to deal with this problem?
I thought about inheriting from BalancerClass (which would then be an abstract class; I don't know if this concept exists in Python) and adding a derived class for every used method. Then I create the correct derived object, which is specified in the simulation behaviour, at run-time.
In my eyes this is a good solution, because it encapsulates the exchangeable methods away from the base class itself. And every used method is managed by its own class, so it can add new internal behaviour if needed.
Furthermore, the doSomething method shouldn't change, so it is implemented in the base class, but it depends on the internal members changed by the derived class.
I don't know, in general, whether this software design is good for solving my problem or whether I am missing a very basic and easy concept.
If you have another/better solution, please tell me; it would be good if you provided its advantages/disadvantages. Also, could you tell me the advantages/disadvantages of my solution that I didn't think of?
Hey, I can be wrong, but what you are looking for boils down to either dependency injection or the strategy design pattern, both of which solve the problem of executing dynamic code at runtime via a common interface without worrying about the actual implementations. There is also the much simpler way you described: creating an abstract class (interface) and having all the classes implement this interface.
I am giving brief examples of each here for your reference:
Dependency Injection (from Wikipedia):
In software engineering, dependency injection is a technique whereby one object supplies the dependencies of another object. A "dependency" is an object that can be used, for example as a service. Instead of a client specifying which service it will use, something tells the client what service to use. The "injection" refers to the passing of a dependency (a service) into the object (a client) that would use it. The service is made part of the client's state.
Passing the service to the client, rather than allowing a client to build or find the service, is the fundamental requirement of the pattern.
Python does not have such a concept built into the language itself, but there are packages out there that implement this pattern.
Here is a nice article about this in python(All credits to the original author):
Dependency Injection in Python
Strategy Pattern: This is an alternative to inheritance and an example of composition, which basically means that instead of inheriting from a base class, we pass the required class's object to the constructor of the class we want to have the functionality in. For example:
Suppose you want a common add() operation, but it can be implemented in different ways (add two numbers or add two strings):
class XYZ:
    def __init__(self, adder):
        self.adder = adder
The only condition is that all adders passed to the XYZ class should share a common interface.
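To make the sketch concrete, a minimal self-contained version might look like this (the adder class names are invented for illustration):

class IntAdder:
    def add(self, a, b):
        return a + b  # numeric addition

class StrAdder:
    def add(self, a, b):
        return str(a) + str(b)  # string concatenation

class XYZ:
    def __init__(self, adder):
        self.adder = adder  # strategy injected at construction time

    def combine(self, a, b):
        return self.adder.add(a, b)

print(XYZ(IntAdder()).combine(2, 3))  # 5
print(XYZ(StrAdder()).combine(2, 3))  # 23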
Here is a more detailed example:
Strategy Pattern in Python
Interfaces:
Interfaces are the simplest: they define a set of common attributes and methods (with or without a default implementation). Any class can then implement an interface with its own functionality or some shared common functionality. In Python, interfaces are implemented via the abc module.
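A minimal sketch of such an interface using the abc module (class names invented for illustration):

from abc import ABC, abstractmethod

class Balancer(ABC):
    @abstractmethod
    def balance(self, obj):
        """Each concrete balancer implements its own strategy."""

class StaticBalancer(Balancer):
    def balance(self, obj):
        return obj  # placeholder logic

# Balancer() raises TypeError; StaticBalancer() instantiates fine.
b = StaticBalancer()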

Why choose a module-level function over @staticmethod in Python (according to the Google Style Guide)?

According to Google Python Style Guide, static methods should (almost) never be used:
Never use @staticmethod unless forced to in order to integrate with an API defined in an existing library. Write a module level function instead.
What are the reasons behind such recommendation?
Is this particular to Google only or are there any other (more general) downsides with using static methods in Python?
Especially, what is the best practice if I want to implement a utility function inside of a class that will be called by other public member functions of that class?
class Foo:
    ...
    def member_func(self):
        some_utility_function(self.member)
How to understand the Google Python Style Guide that says:
Never use @staticmethod unless forced to in order to integrate with an API defined in an existing library. Write a module level function instead
Well, you should understand it as Google's style guide. If you're writing Python code for Google, or contributing to a project that conforms to that style guide, or have chosen to use it for a project of your own, the answer is pretty simple: don't use @staticmethod except when forced to by an API.
This means there are no judgment-call cases: a utility function inside of a class is not forced to be a @staticmethod by an API, so it should not be a @staticmethod.
The same is true for some other common[1] reasons for @staticmethod. If you want a default value for an instance attribute that's meant to hold a callback function… too bad, find another way to write it (e.g., a local function defined inside __init__). If you want something that looks like a @classmethod but explicitly doesn't covary with subclasses… too bad, it just can't look like a @classmethod.
Of course if you're not following Google's style guide, then you should understand it as just one opinion among many. Plenty of Python developers aren't quite as hard against @staticmethod as that guide is. Of course, Google is a pretty prominent developer of lots of Python code. On the other hand, Google's style guide was written while imported Java-isms were more of a problem than they are today.[2] But you probably don't want to think too much about how much weight to give each opinion; instead, when it's important, learn the issues and come up with your own opinion.
As for your specific example, as I said in a comment: the fact that you naturally find yourself writing some_utility_function(self.member) instead of self.some_utility_function(self.member) or Foo.some_utility_function(self.member) means that intuitively, you're already thinking of it as a function, not a @staticmethod. In which case you should definitely write it as a function, not a @staticmethod.
That may be just the opinion of one guy on the internet, but I think most Python developers would agree in this case. It's the times when you do naturally find yourself prefixing self. before every call that there's a judgment call to make.
1. Well, not exactly common. But they aren't so rare that they never come up. And they were common enough that, when there was discussion about deprecating @staticmethod for Python 3, someone quickly came up with these two cases, with examples from the standard library, and that was enough for Guido to kill the discussion.
2. In Java, there are no module-level functions, and you're forced to write static methods to simulate them. And there were a few years when most university CS programs were focused on Java, and a ton of software was written in Java, so tons of people were writing Python classes with way too many @staticmethods (and getters and setters, and other Java-isms).
The way you've written the call to some_utility_function(), it isn't defined on the class anyway. If it were, you would be using self.some_utility_function() or possibly Foo.some_utility_function() to call it. So you've already done it the way the style guide recommends.
The @classmethod and @staticmethod decorators are used primarily to tell Python what to pass as the first argument to the method in place of the usual self: either the type, or nothing at all. But if you're using @staticmethod, and need neither the instance nor its type, should it really be a member of the class at all? That's what they're asking you to consider here: should utility functions be methods of a class, when they are not actually tied to that class in any way? Google says no.
But this is just Google's style guide. They have decided that they want their programmers to prefer module-level functions. Their word is not law. Obviously the designers of Python saw a use for @staticmethod or they wouldn't have implemented it! If you can make a case for having a utility function attached to a class, feel free to use it.
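To make the comparison concrete, a minimal sketch of the two spellings (helper names invented for illustration):

# Module-level function: what the style guide recommends.
def some_utility_function(member):
    return member * 2

class Foo:
    def __init__(self, member):
        self.member = member

    def member_func(self):
        return some_utility_function(self.member)

# The @staticmethod spelling bundles the same helper into the class.
class Bar:
    def __init__(self, member):
        self.member = member

    @staticmethod
    def some_utility_function(member):
        return member * 2

    def member_func(self):
        return self.some_utility_function(self.member)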
My 2¢
The point is that when you want to do duck-typed, polymorphic things, defining module-level functions is overkill, especially if your definitions are very short. E.g. defining
class StaticClassA:
    @staticmethod
    def maker(i: int) -> int:
        return 2*i

class StaticClassB:
    @staticmethod
    def maker(i: int) -> float:
        return pow(i, 2)

# [...] say, 23 other class definitions

class StaticClassZ:
    @staticmethod
    def maker(i: int) -> float:
        return 2*pow(i, 2)
This is clearly smarter than having 26 classes (from A to Z) defined in 26 separate modules.
A practical example of what I mean by "polymorphism"? With the above class definitions, you can do:
for class_ in [StaticClassA, StaticClassB, StaticClassZ]:
    print(class_.maker(6))

Explaining inheritance in Python

I was going through some Python code and found this code snippet:
class A(object):
    ...
    def add_commands(self, cmd):
        self.commands.append(cmd)
    ...

class B(A):
    ...
    def __init__(self):
        self.commands = []
    ...
Now, because of inheritance, B will have access to the method add_commands. What surprises me is that even though class A does not know about the list commands, this program runs just fine; even calling the method on a B object works. The only time it errors out is when we create an object of class A itself and call add_commands. I understand that it's the self reference which is saving us here. This would not be the case in a programming language like C++, where compilation itself would fail.
This brings me to my question - How should one approach inheritance in a programming language like Python? Considering the above example, is that the right way to design a class in Python?
I think that the question of how to approach inheritance in Python is too broad to be answered in a single post (I'd start checking the official docs). What I would like to add is the following:
Don't apply the exact same design principles of inheritance you would apply in a compiled language (like C++ or Java) to Python. One of the biggest strengths of Python is its flexibility (which is a dangerous, but extremely powerful tool if used correctly).
Python is not reporting any errors in the code you posted above because it is not wrong from a Python point of view (and obviously because Python is interpreted and not compiled). The pythonic way to do things is not usually what is recommended (or even possible) in other languages.
You may find yourself in a situation (as has been pointed out above in the comments) where you want to define attributes in a child class that are accessed from its parent's methods. I tried to come up with an example in the hope that it'll be helpful.
Imagine you have two classes representing two different resources (databases, files, etc) FirstResource and SecondResource. Each resource has its own methods, but both of them have structural similarities that cause some methods to have the same implementation, and those operations deal with some attribute resource_attr. The initialization logic for resource_attr is different for each resource.
Maybe this could be an implementation option to solve this situation:
class BaseResource(object):
    def common_operation_a(self):
        # Use self.resource_attr to do Operation A
        pass

    def common_operation_b(self):
        # Use self.resource_attr to do Operation B
        pass

class FirstResource(BaseResource):
    def __init__(self):
        self.resource_attr = initialize_attr_for_first_resource()
    # ... FirstResource-specific operations

class SecondResource(BaseResource):
    def __init__(self):
        self.resource_attr = initialize_attr_for_second_resource()
    # ... SecondResource-specific methods
Then you could use this implementation as follows:
# Create resources objects
first_resource_instance = FirstResource()
second_resource_instance = SecondResource()
# The resources share the implementation of Operation A
first_resource_instance.common_operation_a()
second_resource_instance.common_operation_a()
# The following implementations are resource-specific
first_resource_instance.some_operation_c()
second_resource_instance.some_operation_d()
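If you want BaseResource to state explicitly that subclasses must provide resource_attr, one option is an abstract property. Here is a minimal sketch of that idea (the attribute value is invented for illustration):

from abc import ABC, abstractmethod

class BaseResource(ABC):
    @property
    @abstractmethod
    def resource_attr(self):
        """Subclasses must provide this attribute."""

    def common_operation_a(self):
        # Operation A relies on the attribute the subclass provides.
        return "operating on %s" % self.resource_attr

class FirstResource(BaseResource):
    resource_attr = "first-resource-handle"  # overrides the abstract property

print(FirstResource().common_operation_a())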
Inheritance in Python uses the same paradigms as object-oriented inheritance elsewhere. Because Python is an interpreted language, there is a lot you can get away with, but that doesn't mean your classes shouldn't be designed with the same care as if you were writing in a compiled language like Java or C++.

Python: add a parent class to a class after initial evaluation

General Python Question
I'm importing a Python library (call it animals.py) with the following class structure:
class Animal(object): pass
class Rat(Animal): pass
class Bat(Animal): pass
class Cat(Animal): pass
...
I want to add a parent class (Pet) to each of the species classes (Rat, Bat, Cat, ...); however, I cannot change the actual source of the library I'm importing, so it has to be a run time change.
The following seems to work:
import animals

class Pet(object): pass

for klass in (animals.Rat, animals.Bat, animals.Cat, ...):
    klass.__bases__ = (Pet,) + klass.__bases__
Is this the best way to inject a parent class into an inheritance tree in Python without modifying the source definition of the class in question?
Motivating Circumstances
I'm trying to graft persistence onto a large library that controls lab equipment. Messing with it is out of the question. I want to give ZODB's Persistent a try. I don't want to write the mixin/facade wrapper library because I'm dealing with 100+ classes and lots of imports in my application code that would need to be updated. I'm testing options by hacking on my entry point only: setting up the DB, patching as shown above (but pulling the species classes with introspection on the animals module instead of listing them explicitly), then closing out the DB as I exit.
Mea Culpa / Request
This is an intentionally general question. I'm interested in different approaches to injecting a parent, and in comments on the pros and cons of those approaches. I agree that this sort of runtime chicanery would make for really confusing code. If I settle on ZODB I'll do something explicit. For now, as a regular user of Python, I'm curious about the general case.
Your method is pretty much how to do it dynamically. The real question is: What does this new parent class add? If you are trying to insert your own methods in a method chain that exists in the classes already there, and they were not written properly, you won't be able to; if you are adding original methods (e.g. an interface layer), then you could possibly just use functions instead.
I am one who embraces Python's dynamic nature, and would have no problem using the code you have presented. Make sure you have good unit tests in place (dynamic or not ;), and that modifying the inheritance tree actually lets you do what you need, and enjoy Python!
You should try really hard not to do this. It is strange, and will likely end in tears.
As @agf mentions, you can use Pet as a mixin. If you tell us more about why you want to insert a parent class, we can help you find a nicer solution.
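For completeness, the mixin route would look roughly like this (a sketch; PetRat and cuddle() are invented names):

import animals

class Pet(object):
    """Mixin adding pet behaviour."""
    def cuddle(self):
        return "purr"

# Instead of rewriting animals.Rat's __bases__, derive a combined class:
class PetRat(Pet, animals.Rat):
    pass

rat = PetRat()
assert isinstance(rat, animals.Rat)  # still a Rat
assert isinstance(rat, Pet)          # and now also a Pet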

"Interfaces" in Python: Yea or Nay? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
So I'm starting a project using Python after spending a significant amount of time in static land. I've seen some projects that make "interfaces" which are really just classes without any implementations. Before, I'd scoff at the idea and ignore that section of those projects. But now, I'm beginning to warm up to the idea.
Just so we're clear, an interface in Python would look something like this:
class ISomething(object):
    def some_method():
        pass

    def some_other_method(some_argument):
        pass
Notice that you aren't passing self to any of the methods, thus requiring that each method be overridden in order to be called. I see this as a nice form of documentation and completeness testing.
So what is everyone here's opinion on the idea? Have I been brainwashed by all the C# programming I've done, or is this a good idea?
I'm not sure what the point of that is. Interfaces (of this form, anyway) are largely to work around the lack of multiple inheritance. But Python has MI, so why not just make an abstract class?
class Something(object):
    def some_method(self):
        raise NotImplementedError()

    def some_other_method(self, some_argument):
        raise NotImplementedError()
In Python 2.6 and later, you can use abstract base classes instead. These are useful, because you can then test whether something implements a given ABC by using isinstance(). As usual in Python, the concept is not as strictly enforced as it would be in a stricter language, but it's handy. Moreover, there are nice idiomatic ways of declaring abstract methods with decorators - see the link above for examples.
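A minimal sketch of the ABC approach, using the modern abc.ABC spelling (class names invented for illustration):

from abc import ABC, abstractmethod

class Quacker(ABC):
    @abstractmethod
    def quack(self):
        """Return this object's quack sound."""

class Duck(Quacker):
    def quack(self):
        return "quack"

print(isinstance(Duck(), Quacker))  # True: conformance is testable at runtime
# Quacker() itself raises TypeError because quack() is still abstract.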
There are some cases where interfaces can be very handy. Twisted makes fairly extensive use of Zope interfaces, and in a project I was working on, Zope interfaces worked really well. Enthought's traits package recently added interfaces, but I don't have any experience with them.
Beware overuse though -- duck typing and protocols are a fundamental aspect of Python, only use interfaces if they're absolutely necessary.
The pythonic way is to "Ask for forgiveness rather than receive permission". Interfaces are all about receiving permission to perform some operation on an object. Python prefers this:
def quacker(duck):
    try:
        duck.quack()
    except AttributeError:
        raise ThisAintADuckException
I don't think interfaces would add anything to the code environment.
Method-definition enforcement happens without them. If an object is expected to behave like Foo and have the method bar(), and it doesn't, it will throw an AttributeError.
Simply making sure an interface method gets defined doesn't guarantee its correctness; behavioral unit tests need to be in place anyway.
It's just as effective to write a "read this or die" page describing what methods your object needs to have to be compatible with what you're plugging it into as it is to have elaborate docstrings in an interface class, since you're probably going to have tests for it anyway. One of those tests can be standard for all compatible objects and check the invocation and return type of each base method.
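For instance, a shared conformance test might look like this (a sketch; Duck and the checked return type are invented for illustration):

import unittest

class Duck:  # stand-in for any "compatible object"
    def quack(self):
        return "quack"

class DuckConformanceMixin:
    """Shared checks for any object claiming to be duck-compatible."""
    duck = None  # concrete test classes set this in setUp

    def test_quack_invocation_and_return_type(self):
        self.assertIsInstance(self.duck.quack(), str)

class TestDuck(DuckConformanceMixin, unittest.TestCase):
    def setUp(self):
        self.duck = Duck()

if __name__ == "__main__":
    unittest.main()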
Seems kind of unnecessary to me - when I'm writing classes like that I usually just make the base class (your ISomething) with no methods, and mention in the actual documentation which methods subclasses are supposed to override.
You can create an interface in a dynamically typed language, but there's no enforcement of the interface at compile time. A statically typed language's compiler will warn you if you forget to implement (or mistype!) a method of an interface. Since you receive no such help in a dynamically typed language, your interface declaration serves only as documentation. (Which isn't necessarily bad, it's just that your interface declaration provides no runtime advantage versus writing comments.)
I'm about to do something similar with my Python project; the only things I would add are:
Extra long, in-depth doc strings for each interface and all the abstract methods.
I would add in all the required arguments so there's a definitive list.
Raise an exception instead of 'pass'.
Prefix all methods so they are obviously part of the interface - interface Foo: def foo_method1()
I personally use interfaces a lot in conjunction with the Zope Component Architecture (ZCA). The advantage is not so much to have interfaces but to be able to use them with adapters and utilities (singletons).
E.g. you could create an adapter which can take a class which implements ISomething but adapts it to some other interface, ISomethingElse. Basically it's a wrapper.
The original class would be:
from zope.interface import implements

class MyClass(object):
    implements(ISomething)

    def do_something(self):
        return "foo"
Then imagine interface ISomethingElse has a method do_something_else(). An adapter could look like this:
from zope.component import adapts

class SomethingElseAdapter(object):
    implements(ISomethingElse)
    adapts(ISomething)

    def __init__(self, context):
        self.context = context

    def do_something_else(self):
        return self.context.do_something() + "bar"
You then would register that adapter with the component registry and you could then use it like this:
>>> obj = MyClass()
>>> print obj.do_something()
foo
>>> adapter = ISomethingElse(obj)
>>> print adapter.do_something_else()
foobar
What that gives you is the ability to extend the original class with functionality which the class does not provide directly. You can do that without changing that class (it might be in a different product/library), and you can simply exchange that adapter for a different implementation without changing the code which uses it. It's all done by registration of components at initialization time.
This of course is mainly useful for frameworks/libraries.
I think it takes some time to get used to it, but I really don't want to live without it anymore. As said before, though, you do need to think about exactly where it makes sense and where it doesn't. Of course, interfaces on their own can already be useful as documentation of the API. They are also useful for unit tests, where you can check whether your class actually implements the interface. And last but not least, I like starting by writing the interface and some doctests to get an idea of what I am actually about to code.
For more information you can check out my little introduction to it, and there is a quite extensive description of its API.
Glyph Lefkowitz (of Twisted fame) just recently wrote an article on this topic. Personally I do not feel the need for interfaces, but YMMV.
Have you looked at PyProtocols? It has a nice interface implementation that you should look at.
