Selecting executed method of class at runtime in Python?

This question is very generic, but I don't think it is opinion-based. It is about software design, and the example prototype is in Python:
I am writing a program whose goal is to simulate some behaviour (the details don't matter). The data the simulation works on is fixed, but I want to choose the simulated behaviour each time the program starts. The simulation behaviour can't be changed at runtime.
Example:
Simulation behaviour is defined like:
usedMethod = static
The program then looks something like this:
while True:
    result = static(object)  # static is the method specified in the behaviour
    # do something with result
The question is: what is the best way to deal with exchangeable, externally defined functions? Another run of the simulation could then look like this
while True:
    result = dynamic(object)
if dynamic is specified as usedMethod. The first thing that came to my mind was an if-else block that checks which method is configured and then executes it. This solution would not be very good, because every time I add new behaviour I have to change the if-else block, and the if-else block itself might cost performance, which also matters. The simulations should be fast.
A solution I could think of was using a function pointer (the inputs and outputs of all usedMethods are well defined, so that should not be a problem). I would then initialize the function pointer at startup, where the used method is defined.
The problem I currently have is that the used method is not a function per se, but a method of a class, and it depends heavily on the internal members of that class, so the code looks more like this:
balance = BalancerClass()
while True:
    result = balance.static(object)
    ...
    balance.doSomething(input)
So my question is, what is a good solution to deal with this problem?
I thought about inheriting from the BalancerClass (which would then be an abstract class; I don't know if this concept exists in Python) and adding a derived class for every used method. At run time I would then create the derived object that is specified in the simulation behaviour.
In my eyes, this is a good solution, because it encapsulates the methods from the base class itself. And every used method is managed by its own class, so it can add new internal behaviour if needed.
Furthermore, the doSomething method shouldn't change, so it is implemented in the base class, but it depends on internal members that the derived class modifies.
I don't know in general if this software design is good to solve my problem or if I am missing a very basic and easy concept.
If you have another/better solution, please tell me, and it would be good if you provided the advantages/disadvantages. Could you also tell me advantages/disadvantages of my solution that I didn't think of?
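Roughly, the design I have in mind would look like the following sketch (all class and method names are placeholders, and since I'm not sure about the abstract-class part, the base method here just raises an error):

class Balancer(object):
    def compute(self, item):                 # the exchangeable behaviour
        raise NotImplementedError

    def doSomething(self, data):
        pass                                 # shared logic, stays in the base class

class StaticBalancer(Balancer):
    def compute(self, item):
        return item                          # stand-in for the "static" behaviour

class DynamicBalancer(Balancer):
    def compute(self, item):
        return item * 2                      # stand-in for the "dynamic" behaviour

# chosen once at startup, e.g. from a config value:
BEHAVIOURS = {"static": StaticBalancer, "dynamic": DynamicBalancer}
balance = BEHAVIOURS["static"]()

for item in range(3):                        # stands in for the simulation loop
    result = balance.compute(item)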

Hey, I could be wrong, but what you are looking for boils down to either dependency injection or the strategy design pattern, both of which solve the problem of executing dynamic code at runtime via a common interface without worrying about the actual implementations. There are also much simpler ways, just like you described: creating an abstract class (interface) and having all the classes implement this interface.
I am giving brief examples of each here for your reference:
Dependency Injection (from Wikipedia):
In software engineering, dependency injection is a technique whereby one object supplies the dependencies of another object. A "dependency" is an object that can be used, for example as a service. Instead of a client specifying which service it will use, something tells the client what service to use. The "injection" refers to the passing of a dependency (a service) into the object (a client) that would use it. The service is made part of the client's state.
Passing the service to the client, rather than allowing a client to build or find the service, is the fundamental requirement of the pattern.
Python does not have such a concept built into the language itself, but there are packages out there that implement this pattern.
Here is a nice article about this in Python (all credits to the original author):
Dependency Injection in Python
Strategy Pattern: This is an alternative to inheritance and an example of composition, which basically means that instead of inheriting from a base class we pass the required class's object to the constructor of the class we want to have the functionality in. For example:
Suppose you want to have a common add() operation, but it can be implemented in different ways (add two numbers or add two strings):
class XYZ(object):
    def __init__(self, adder):
        self.adder = adder
The only condition is that all adders passed to the XYZ class should share a common interface, as in the small sketch below.
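For instance (NumberAdder and StringAdder are just illustrative names, not from any library), two interchangeable adders for the class above could look like this:

class NumberAdder(object):
    def add(self, a, b):
        return a + b            # numeric addition

class StringAdder(object):
    def add(self, a, b):
        return str(a) + str(b)  # string concatenation

xyz = XYZ(NumberAdder())
print(xyz.adder.add(1, 2))      # 3
xyz = XYZ(StringAdder())
print(xyz.adder.add(1, 2))      # "12"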
Here is a more detailed example:
Strategy Pattern in Python
Interfaces:
Interfaces are the simplest: they define a set of common attributes and methods (with or without a default implementation). Any class can then implement an interface with its own functionality or some shared common functionality. In Python, interfaces are typically implemented via the abc package.
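A minimal Python 3 sketch of such an interface with abc (the names here are made up for illustration):

import abc
import json

class Serializer(abc.ABC):
    @abc.abstractmethod
    def dumps(self, obj):
        """Turn obj into a string."""

class JsonSerializer(Serializer):
    def dumps(self, obj):
        return json.dumps(obj)

# Serializer() would raise a TypeError; the concrete class works fine:
print(JsonSerializer().dumps({"a": 1}))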

Related

Python/Django and services as classes

Are there any conventions on how to implement services in Django? Coming from a Java background, we create services for business logic and we "inject" them wherever we need them.
Not sure if I'm using python/django the wrong way, but I need to connect to a 3rd party API, so I'm using an api_service.py file to do that. The question is, I want to define this service as a class, and in Java, I can inject this class wherever I need it and it acts more or less like a singleton. Is there something like this I can use with Django or should I build the service as a singleton and get the instance somewhere or even have just separate functions and no classes?
TL;DR It's hard to tell without more details but chances are you only need a mere module with a couple plain functions or at most just a couple simple classes.
Longest answer:
Python is not Java. You can of course (technically I mean) use Java-ish designs, but this is usually not the best thing to do.
Your description of the problem to solve is a bit too vague to come up with a concrete answer, but we can at least give you a few hints and pointers (no pun intended):
1/ Everything is an object
In python, everything (well, everything you can find on the RHS of an assignment that is) is an object, including modules, classes, functions and methods.
One of the consequences is that you don't need any complex framework for dependency injection - you just pass the desired object (module, class, function, method, whatever) as argument and you're done.
Another consequence is that you don't necessarily need classes for everything - a plain function or module can be just enough.
A typical use case is the strategy pattern, which, in Python, is most often implemented using a mere callback function (or any other callable FWIW).
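For example (a minimal sketch, names made up): the "strategy" is just whatever callable you pass in.

def by_length(word):
    return len(word)

def run(data, key):              # `key` is the injected strategy
    return sorted(data, key=key)

print(run(["python", "is", "fun"], by_length))   # shortest first
print(run(["Python", "is", "fun"], str.lower))   # case-insensitive sort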
2/ a python module is a singleton.
As stated above, at runtime a python module is an object (of type module) whose attributes are the names defined at the module's top-level.
Except for some (pathological) corner cases, a python module is only imported once for a given process and is guaranteed to be unique. Combined with the fact that python's "global" scope is really only "module-level" global, this makes modules proper singletons, so this design pattern is actually already builtin.
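A small sketch (the module name settings.py is hypothetical): module-level state is shared by every importer, which is exactly what a singleton gives you.

# settings.py - module-level state, shared by every importer
_options = {}

def set_option(name, value):
    _options[name] = value

def get_option(name, default=None):
    return _options.get(name, default)

# anywhere else in the code base:
# import settings
# settings.set_option("debug", True)   # all importers see the same value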
3/ a python class is (almost) a singleton
Python classes are objects too (instances of type type, directly or indirectly), and python has classmethods (methods that act on the class itself instead of acting on the current instance) and class-level attributes (attributes that belong to the class object itself, not to its instances), so if you write a class that only has classmethods and class attributes, you technically have a singleton - and you can use this class either directly or through instances without any difference, since classmethods can be called on instances too.
The main difference here wrt/ "modules as singletons" is that with classes you can use inheritance...
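A minimal sketch (names made up) of a class used that way:

class Counter:
    count = 0                 # class-level attribute: one copy, shared

    @classmethod
    def bump(cls):
        cls.count += 1
        return cls.count

Counter.bump()                # called on the class itself...
Counter().bump()              # ...or on an instance - same shared state
print(Counter.count)          # 2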
4/ python has callables
Python has the concept of "callable" objects. A "callable" is an object whose class implements the __call__() operator, and each such object can be called as if it was a function.
This means that you can not only use functions as objects but also use objects as functions - IOW, the "functor" pattern is builtin. This makes it very easy to "capture" some context in one part of the code and use this context for computations in another part.
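An illustrative sketch: the context is captured in __init__ and used later in __call__.

class Multiplier:
    def __init__(self, factor):
        self.factor = factor            # context captured here

    def __call__(self, value):
        return value * self.factor      # ...and used when called later

double = Multiplier(2)
print(double(21))                       # 42 - used exactly like a function
print(list(map(double, [1, 2, 3])))     # [2, 4, 6]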
5/ a python class is a factory
Python has no new keyword. Python classes are callables, and instantiation is done by just calling the class.
This means that you can actually use a class or function the same way to get an instance, so the "factory" pattern is also builtin.
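For example, callers don't need to know whether they were handed a class or a plain function:

import datetime

def build(factory):
    # `factory` may be a class or any other callable - the call is the same
    return factory()

print(build(dict))                      # {}  - dict is a class
print(build(list))                      # []
print(build(datetime.datetime.now))     # now() is just a callable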
6/ python has computed attributes
and besides the most obvious application (replacing a public attribute with a pair of getter/setter without breaking client code), this - combined with other features like callables etc. - can prove to be very powerful. As a matter of fact, that's how functions defined in a class become methods.
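A small sketch (illustrative names): the attribute is computed on access but looks like plain data to client code.

class Circle:
    def __init__(self, radius):
        self.radius = radius

    @property
    def area(self):                     # computed each time it's read
        return 3.141592653589793 * self.radius ** 2

c = Circle(2)
print(c.area)                           # accessed like a plain attribute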
7/ Python is dynamic
Python's objects are (usually) dict-based (there are exceptions but those are few and mostly low-level C-coded classes), which means you can dynamically add / replace (and even remove) attributes and methods (since methods are attributes) on a per-instance or per-class basis.
While this is not a feature you want to use without reason, it's still a very powerful one, as it allows you to dynamically customize an object (remember that classes are objects too), allowing for more complex object and class creation schemes than what you can do in a static language.
But Python's dynamic nature goes even further - you can use class decorators and/or metaclasses to tailor the creation of a class object (you may want to have a look at the Django models source code for a concrete example), or even just dynamically create a new class using its metaclass and a dict of functions and other class-level attributes.
Here again, this can really make seemingly complex issues a breeze to solve (and avoid a lot of boilerplate code).
Actually, Python exposes and lets you hook into most of its internals (object model, attribute resolution rules, import mechanism etc.), so once you understand the whole design and how everything fits together you really have a handle on most aspects of your code at runtime.
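Two small illustrations of that dynamism (names made up): adding a method to an existing class, and building a class from its metaclass.

class Sensor:
    pass

def describe(self):
    return "sensor at %s" % id(self)

Sensor.describe = describe              # patch a method onto the class
print(Sensor().describe())

# create a class dynamically via its metaclass (here just `type`):
Probe = type("Probe", (Sensor,), {"unit": "V"})
print(Probe.unit, Probe.__bases__)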
Python is not Java
Now I understand that all of this looks a bit like a vendor's catalog, but the point is to highlight how Python differs from Java and why canonical Java solutions - or (at least) canonical Java implementations of those solutions - usually don't port well to the Python world. It's not that they don't work at all, just that Python usually has more straightforward (and much simpler IMHO) ways to implement common (and less common) design patterns.
wrt/ your concrete use case, you will have to post a much more detailed description, but "connecting to a 3rd party API" (I assume a REST API?) from a Django project is so trivial that it really doesn't warrant much design consideration by itself.
In Python you can write the same program structure as in Java. You don't need to be so strongly typed, but you can. I use types when creating common classes and libraries that are used across multiple scripts.
Here you can read about Python typing
You can do the same here in Python. Define your class in a package (folder) called services.
Then if you want a singleton you can do it like this:
class Service(object):
    instance = None

    def __new__(cls):
        if cls.instance is not None:
            return cls.instance
        else:
            inst = cls.instance = super(Service, cls).__new__(cls)
            return inst
And now you import it wherever you want in the rest of the code
from services import Service
Service().do_action()
Adding to the answers given by bruno desthuilliers and TreantBG.
There are certain questions that you can ask about the requirements.
For example, one question could be: does the API being called change with different types of objects?
If the API doesn't change, you will probably be okay with keeping it as a method in some file or class.
If it does change, such that you are calling API 1 for some scenarios, API 2 for others, and so on and so forth, you will likely be better off moving/abstracting this logic out to some class (from a better code organisation point of view).
PS: Python allows you to be as flexible as you want when it comes to code organisation. It's really up to you to decide how you want to organise the code.

Advantages of using static methods over instance methods in python

My IDE keeps suggesting I convert my instance methods to static methods, I guess because I haven't referenced any self within these methods.
An example is:
class NotificationViewSet(NSViewSet):
    def pre_create_processing(self, request, obj):
        log.debug(" creating messages ")
        # Ensure data is consistent and belongs to the sending bot.
        obj['user_id'] = request.auth.owner.id
        obj['bot_id'] = request.auth.id
So my question would be: do I lose anything by just ignoring the IDE suggestions, or is there more to it?
This is a matter of workflow, intentions with your design, and also a somewhat subjective decision.
First of all, you are right, your IDE suggests converting the method to a static method because the method does not use the instance. It is most likely a good idea to follow this suggestion, but you might have a few reasons to ignore it.
Possible reasons to ignore it:
The code is soon to be changed to use the instance (on the other hand, the idea of soon is subjective, so be careful)
The code is legacy and not entirely understood/known
The interface is used in a polymorphic/duck typed way (e.g. you have a collection of objects with this method and you want to call them in a uniform way, but the implementation in this class happens to not need to use the instance - which is a bit of a code smell)
The interface is specified externally and cannot be changed (this is analog to the previous reason)
The AST of the code is read/manipulated either by itself or something that uses it and expects this method to be an instance method (this again is an external dependency on the interface)
I'm sure there can be more, but failing these types of reasons I would follow the suggestion. However, if the method does not belong to the class (e.g. factory method or something similar), I would refactor it to not be part of the class.
I think that you might be mixing up some terminology - the example is not a class method. Class methods receive the class as the first argument, they do not receive the instance. In this case you have a normal instance method that is not using its instance.
If the method does not belong in the class, you can move it out of the class and make it a standard function. Otherwise, if it should be bundled as part of the class, e.g. it's a factory function, then you should probably make it a static method, as this (at a minimum) serves as useful documentation to users of your class that the method is coupled to the class, but not dependent on its state.
Making the method static also has the advantage that it can be overridden in subclasses of the class. If the method were moved outside of the class as a regular function, then overriding it via subclassing would not be possible.
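A small sketch of that last point (the names here are made up): the static method stays on the class, so a subclass can override it and instance code picks up the override.

class Exporter:
    @staticmethod
    def clean(value):                    # no self: independent of instance state
        return value.strip()

    def export(self, value):
        return self.clean(value)         # lookup still goes through the class

class CsvExporter(Exporter):
    @staticmethod
    def clean(value):                    # overriding the static method works
        return value.strip().replace(",", ";")

print(Exporter().export("  a,b "))       # 'a,b'
print(CsvExporter().export("  a,b "))    # 'a;b'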

Best practice - accessing object variables

I've been making a lot of classes in Python recently and I usually just access instance variables like this:
object.variable_name
But often I see that objects from other modules will make wrapper methods to access variables like this:
object.getVariable()
What are the advantages/disadvantages to these different approaches and is there a generally accepted best practice (even if there are exceptions)?
There should never be any need in Python to use a method call just to get an attribute. The people who have written this are probably ex-Java programmers, where that is idiomatic.
In Python, it's considered proper to access the attribute directly.
If it turns out that you need some code to run when accessing the attribute, for instance to calculate it dynamically, you should use the @property decorator.
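For example (an illustrative sketch): client code keeps using plain attribute access even after logic is added.

class Account:
    def __init__(self, balance):
        self._balance = balance

    @property
    def balance(self):                   # reads stay as obj.balance
        return self._balance

    @balance.setter
    def balance(self, value):            # code now runs on assignment too
        if value < 0:
            raise ValueError("balance cannot be negative")
        self._balance = value

acct = Account(10)
acct.balance = 25                        # no getter/setter methods needed
print(acct.balance)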
The main advantage of "getters" (the getVariable form), in my modest opinion, is that it's much easier to add functionality or evolve your objects without changing the signatures.
For instance, let's say that my object changes from implementing some functionality itself to encapsulating another object and providing the same functionality via the Proxy Pattern (composition). If I'm using getters to access the properties, it doesn't matter where that property is being fetched from, and no change whatsoever is visible to the "clients" using your code.
I use getters and such methods especially when my code is being reused (as a library for instance), by others. I'm much less picky when my code is self-contained.
In Java this is almost a requirement: you should never access your object's fields directly. In Python it's perfectly legitimate to do so, but you may take into consideration the possible benefits of encapsulation that I mentioned. Still, keep in mind that direct access is not considered bad form in Python - on the contrary.
Making getVariable() and setVariable() methods is called encapsulation.
There are many advantages to this practice, and it is the preferred style in object-oriented programming.
By accessing your variables through methods you can add another layer of "error checking/handling" by making sure the value you are trying to set/get is correct.
The setter method is also used for other tasks, like notifying listeners that the variable has changed.
At least in Java/C#/C++ and so on.

Python: add a parent class to a class after initial evaluation

General Python Question
I'm importing a Python library (call it animals.py) with the following class structure:
class Animal(object): pass
class Rat(Animal): pass
class Bat(Animal): pass
class Cat(Animal): pass
...
I want to add a parent class (Pet) to each of the species classes (Rat, Bat, Cat, ...); however, I cannot change the actual source of the library I'm importing, so it has to be a run time change.
The following seems to work:
import animals
class Pet(object): pass
for klass in (animals.Rat, animals.Bat, animals.Cat, ...):
    klass.__bases__ = (Pet,) + klass.__bases__
Is this the best way to inject a parent class into an inheritance tree in Python without making modification to the source definition of the class to be modified?
Motivating Circumstances
I'm trying to graft persistence onto a large library that controls lab equipment. Messing with it is out of the question. I want to give ZODB's Persistent a try. I don't want to write the mixin/facade wrapper library because I'm dealing with 100+ classes and lots of imports in my application code that would need to be updated. I'm testing options by hacking on my entry point only: setting up the DB, patching as shown above (but pulling the species classes with introspection on the animals module instead of listing them explicitly), then closing out the DB as I exit.
Mea Culpa / Request
This is an intentionally general question. I'm interested in different approaches to injecting a parent and comments on the pros and cons of those approaches. I agree that this sort of runtime chicanery would make for really confusing code. If I settle on ZODB I'll do something explicit. For now, as a regular user of python, I'm curious about the general case.
Your method is pretty much how to do it dynamically. The real question is: What does this new parent class add? If you are trying to insert your own methods in a method chain that exists in the classes already there, and they were not written properly, you won't be able to; if you are adding original methods (e.g. an interface layer), then you could possibly just use functions instead.
I am one who embraces Python's dynamic nature, and would have no problem using the code you have presented. Make sure you have good unit tests in place (dynamic or not ;), and that modifying the inheritance tree actually lets you do what you need, and enjoy Python!
You should try really hard not to do this. It is strange, and will likely end in tears.
As @agf mentions, you can use Pet as a mixin. If you tell us more about why you want to insert a parent class, we can help you find a nicer solution.
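For illustration, a mixin-based sketch (PetRat and PetCat are made-up names) that avoids touching __bases__ by declaring explicit subclasses instead:

import animals

class Pet(object):
    def feed(self):
        return "fed a %s" % type(self).__name__

class PetRat(Pet, animals.Rat):
    pass

class PetCat(Pet, animals.Cat):
    pass

print(PetRat().feed())                   # 'fed a PetRat'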

"Interfaces" in Python: Yea or Nay? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
So I'm starting a project using Python after spending a significant amount of time in static land. I've seen some projects that make "interfaces" which are really just classes without any implementations. Before, I'd scoff at the idea and ignore that section of those projects. But now, I'm beginning to warm up to the idea.
Just so we're clear, an interface in Python would look something like this:
class ISomething(object):
    def some_method():
        pass
    def some_other_method(some_argument):
        pass
Notice that you aren't passing self to any of the methods, thus requiring that the method be overridden to be called. I see this as a nice form of documentation and completeness testing.
So what is everyone here's opinion on the idea? Have I been brainwashed by all the C# programming I've done, or is this a good idea?
I'm not sure what the point of that is. Interfaces (of this form, anyway) are largely to work around the lack of multiple inheritance. But Python has MI, so why not just make an abstract class?
class Something(object):
    def some_method(self):
        raise NotImplementedError()
    def some_other_method(self, some_argument):
        raise NotImplementedError()
In Python 2.6 and later, you can use abstract base classes instead. These are useful, because you can then test to see if something implements a given ABC by using "isinstance". As usual in Python, the concept is not as strictly enforced as it would be in a strict language, but it's handy. Moreover there are nice idiomatic ways of declaring abstract methods with decorators - see the link above for examples.
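A small Python 3 sketch of that (names made up): the abstract method is declared with a decorator, and isinstance() also recognises classes registered as virtual subclasses.

import abc

class Quacker(abc.ABC):
    @abc.abstractmethod
    def quack(self):
        raise NotImplementedError

class Duck(Quacker):
    def quack(self):
        return "quack"

class Robot:
    def quack(self):
        return "beep"

Quacker.register(Robot)                  # virtual subclass, no inheritance

print(isinstance(Duck(), Quacker))       # True
print(isinstance(Robot(), Quacker))      # True, thanks to register()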
There are some cases where interfaces can be very handy. Twisted makes fairly extensive use of Zope interfaces, and in a project I was working on Zope interfaces worked really well. Enthought's traits package recently added interfaces, but I don't have any experience with them.
Beware overuse though -- duck typing and protocols are a fundamental aspect of Python, only use interfaces if they're absolutely necessary.
The pythonic way is to "Ask for forgiveness rather than receive permission". Interfaces are all about receiving permission to perform some operation on an object. Python prefers this:
def quacker(duck):
    try:
        duck.quack()
    except AttributeError:
        raise ThisAintADuckException
I don't think interfaces would add anything to the code environment.
Method definition enforcement happens without them. If an object is expected to behave like Foo and have a method bar(), and it doesn't, it will throw an AttributeError.
Simply making sure an interface method gets defined doesn't guarantee its correctness; behavioral unit tests need to be in place anyway.
It's just as effective to write a "read this or die" page describing what methods your object needs in order to be compatible with whatever you're plugging it into as it is to have elaborate docstrings in an interface class, since you're probably going to have tests for it anyway. One of those tests can be standard for all compatible objects and check the invocation and return type of each base method.
Seems kind of unnecessary to me - when I'm writing classes like that I usually just make the base class (your ISomething) with no methods, and mention in the actual documentation which methods subclasses are supposed to override.
You can create an interface in a dynamically typed language, but there's no enforcement of the interface at compile time. A statically typed language's compiler will warn you if you forget to implement (or mistype!) a method of an interface. Since you receive no such help in a dynamically typed language, your interface declaration serves only as documentation. (Which isn't necessarily bad, it's just that your interface declaration provides no runtime advantage versus writing comments.)
I'm about to do something similar with my Python project, the only things I would add are:
Extra long, in-depth doc strings for each interface and all the abstract methods.
I would add in all the required arguments so there's a definitive list.
Raise an exception instead of 'pass'.
Prefix all methods so they are obviously part of the interface - interface Foo: def foo_method1()
I personally use interfaces a lot in conjunction with the Zope Component Architecture (ZCA). The advantage is not so much to have interfaces but to be able to use them with adapters and utilities (singletons).
E.g. you could create an adapter which takes a class that implements ISomething and adapts it to the interface ISomethingElse. Basically it's a wrapper.
The original class would be:
class MyClass(object):
    implements(ISomething)

    def do_something(self):
        return "foo"
Then imagine interface ISomethingElse has a method do_something_else(). An adapter could look like this:
class SomethingElseAdapter(object):
    implements(ISomethingElse)
    adapts(ISomething)

    def __init__(self, context):
        self.context = context

    def do_something_else(self):
        return self.context.do_something() + "bar"
You then would register that adapter with the component registry and you could then use it like this:
>>> obj = MyClass()
>>> print obj.do_something()
"foo"
>>> adapter = ISomethingElse(obj)
>>> print adapter.do_something_else()
"foobar"
What that gives you is the ability to extend the original class with functionality which the class does not provide directly. You can do that without changing that class (it might be in a different product/library), and you could simply exchange that adapter for a different implementation without changing the code which uses it. It's all done by registration of components at initialization time.
This of course is mainly useful for frameworks/libraries.
I think it takes some time to get used to it, but I really don't want to live without it anymore. But as said before, it's also true that you need to think about exactly where it makes sense and where it doesn't. Of course, interfaces on their own can already be useful as documentation of the API. They are also useful for unit tests, where you can check whether your class actually implements the interface. And last but not least, I like starting by writing the interface and some doctests to get an idea of what I am actually about to code.
For more information you can check out my little introduction to it, and there is a quite extensive description of its API.
Glyph Lefkowitz (of Twisted fame) just recently wrote an article on this topic. Personally I do not feel the need for interfaces, but YMMV.
Have you looked at PyProtocols? It has a nice interface implementation that you should look at.
