Python: class renaming vs. inheriting [duplicate]

This question already has answers here:
How to set class names dynamically?
(3 answers)
Closed 5 years ago.
Would it be correct to say that if some Python class is well written, then

class Subclass(BaseClass):
    pass

should be sufficient to create a well-behaved class with behavior similar to that of BaseClass?
(I am writing "similar" and not "identical" because, for example, SubClass.__name__ or SubClass.__qualname__ would not be the same as their counterparts in BaseClass, and this would possibly (probably) also extend to __str__, __repr__, and possibly other metadata.)
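For instance, a quick check (a minimal sketch using the trivial subclass above) shows how the metadata diverges:

class BaseClass:
    pass

class Subclass(BaseClass):
    pass

print(BaseClass.__name__)  # BaseClass
print(Subclass.__name__)   # Subclass
print(Subclass())          # <__main__.Subclass object at 0x...> -- shows the subclass name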
Would it make sense to use such "empty" inheritance to do class renaming for better semantics? E.g., would you inherit from collections.Counter to call it GuestsCount if you want to count how many adults / children / babies will be attending some event? Or call it a "Polynomial" and use the count values to represent the coefficients of some class that would represent variables raised to some power (i.e. x^2 or y^3), and so on?
EDIT:
I don't see how my question is even related to dynamic renaming of classes in any way as it stands.
I am talking about inheritance vs. aliasing (or possibly just instantiating), but not about renaming an existing class dynamically, nor about dynamically creating classes and the issues related to naming those dynamically created classes, as discussed in the so-called duplicate mentioned here :(

Although it is not common, I think it does make sense. I'd refer you to this article by Robert Martin; especially the last paragraph supports your rationale. Although the article deals with renaming functions, the same arguments could hold for renaming classes.
Additionally, concepts as different as PersonCounter and Polynomial will most likely soon diverge in functionality too, even though they start from the same class, so it makes sense to make them different classes.
Note: a closely related, common pattern in Python frameworks is subclasses that have only one class attribute:

from collections import Counter

class GuestCounter(Counter):
    datatype = Person  # Person is assumed to be defined elsewhere

class Polynomial(Counter):
    datatype = float
which could be useful if you create factory functions, type checkers, or adapter functions for your objects.
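For example, here is a minimal sketch of such a factory function; the make_counter helper is hypothetical, and str stands in for the assumed Person class:

from collections import Counter

class GuestCounter(Counter):
    datatype = str  # stand-in for a Person class

class Polynomial(Counter):
    datatype = float

def make_counter(cls, items):
    # Reject keys that do not match the subclass's declared datatype.
    for key in items:
        if not isinstance(key, cls.datatype):
            raise TypeError(f"expected {cls.datatype.__name__} keys, got {type(key).__name__}")
    return cls(items)

guests = make_counter(GuestCounter, ["adult", "adult", "child"])
print(guests)  # GuestCounter({'adult': 2, 'child': 1})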

An additional advantage is that you can add attributes to instances of the SubClass, while this might not be possible for instances of the BaseClass (dict or list, for example, whose instances do not accept arbitrary attributes).
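A quick demonstration of this point (a minimal sketch):

class AttrDict(dict):
    pass

plain = dict()
subclassed = AttrDict()

subclassed.source = "guest list"  # fine: subclass instances get a __dict__
try:
    plain.source = "guest list"
except AttributeError as e:
    print(e)  # 'dict' object has no attribute 'source'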

It might make sense if you actually get better semantics, but in your examples you do not.
Having AdultCounter and ChildCounter suggests that counting adults is somewhat different from counting children, which is false and misleading. Not because they happen to share the same implementation (as explained in Uncle Bob's article linked in the other answer, that would be fine) but because they are conceptually the same. Counting abstracts over any attributes of the items counted. Counting adults, children, sheep, or a mix of them: it is all the same.
adults = AdultCounter({ev: ev.num_adults() for ev in events})
children = ChildCounter({ev: ev.num_children() for ev in events})
The occasional reader would wonder what these counters do that a bare Counter does not. They would have to look at the definitions to find out the answer: nothing. So what's the point? The part that is actually different, the filtering, is done outside the counters.
And for polynomials, a Counter does not look like a good abstraction. Why would mypoly.most_common() return the term with the highest coefficient? Why does poly1 + poly2 work while 2 * poly does not, and poly1 - poly2 is buggy?
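To ground that last claim, here is a sketch of what Counter arithmetic actually does when keys are taken to be exponents and values coefficients (an assumption made for illustration):

from collections import Counter

p1 = Counter({2: 1, 0: -4})  # intended to mean x**2 - 4
p2 = Counter({2: 1, 1: 3})   # intended to mean x**2 + 3*x

print(p1 + p2)  # Counter({1: 3, 2: 2}) -- the -4 constant term is silently dropped
print(p1 - p2)  # Counter() -- subtraction discards non-positive counts entirely
# 2 * p1 raises TypeError: Counter does not define scalar multiplication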

Related

how to use abstract data types in python?

I want to know how abstract data types work in Python, because my teacher gave us a project and said that we shall use them. We have to write 3 minor functions that we will use in the other 4 main functions (the most important ones).
What I want to understand is this: the teacher said that if we used, for example, lists in our minor functions, the code should still run well if he changes the interior of the minor functions to tuples or dictionaries (for example).
I don't know how that is supposed to work. Can you explain it to me and give a simple example?
In object-oriented programming, an abstract class is like a normal class that cannot be instantiated.
It's a way for the class designer to provide a blueprint of a class, so that its methods have to be implemented by the developer writing a class that inherits from it.
Now, for abstract data types, according to Wikipedia:
An abstract data type is defined as a mathematical model of the data objects that make up a data type as well as the functions that operate on these objects. There are no standard conventions for defining them. A broad division may be drawn between "imperative" and "functional" definition styles.
As you can see, abstract pretty much means blueprints, not actual implementations; although in Java, for example, an abstract class can have method bodies, i.e. implementations of its methods, it just cannot be instantiated.
Furthermore, in Python, an abstract data type is usually one which you would define yourself.
Take, for example, a list of key-value pairs and a hash table: both can implement the same abstract data type, a dictionary, even though in Python the dictionary appears as a built-in.
Abstraction is the technique by which you design such data types; it is better viewed as a concept than as a concrete data type.
There is more useful information on GeeksforGeeks.
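To illustrate the teacher's point with a minimal sketch (all the names here are invented for the example): the minor function's internal representation changes, but as long as its interface stays the same, the main function keeps working.

def load_scores_v1(raw):
    # Minor function, version 1: builds the mapping from a list of pairs.
    pairs = [line.split(",") for line in raw]
    return {name: int(score) for name, score in pairs}

def load_scores_v2(raw):
    # Minor function, version 2: fills a dict directly. Same interface.
    scores = {}
    for line in raw:
        name, score = line.split(",")
        scores[name] = int(score)
    return scores

def top_student(load):
    # Main function: relies only on the returned mapping, not on internals.
    scores = load(["ana,17", "bo,19"])
    return max(scores, key=scores.get)

print(top_student(load_scores_v1))  # bo
print(top_student(load_scores_v2))  # bo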

Call the same method in all objects in Python?

Long story short, I need to find a shortcut to calling the same method in multiple objects which were made from multiple classes.
Now, all of the classes have the same parent class, and even though the method differs between the different classes, I figured that giving the methods the same name would work. So I thought I might just be able to do something like this:
for object in listOfObjects:
    object.method()
It hasn't worked. It might very well be a misspelling on my part, but I can't find it. I think I could solve it by making a list that only adds the objects I need, but that would require a lot of coding, including changing other classes.
~~ skip to the last paragraph for pseudocode accurately describing what I need ~~
At this point, I will go into more detail about what specifically I am doing. I hope that it will better illustrate the scope of my question, and that answering it will be more broadly applicable. The more general form of the question is above, but this might help answer it. Please be aware that I will change the question once I get an answer so that it more closely represents what I need done, and so that it can apply to a wide variety of problems.
I am working on a gravity simulator. Whereas most simulators make objects which interact with one another and represent full bodies where their center of gravity is the actual attraction point, I am attempting to write a program which will simulate the distribution of gravity across all given points within an object.
As such, each object (not in programming terms, but in literal terms) is made up of a bunch of tiny objects (both literally and figuratively). Essentially, what I am trying to do is call the object.gravity() method, which takes into account the gravity from all other objects in the simulation and then moves the position of this particular object based on that input.
Now, either due to a syntactical bug (which I kinda doubt) or due to Python's limitations, I am unable to get all of the particles to behave properly all at once. The code snippet I posted before doesn't seem to be working.
tl;dr:
As such, I am wondering if there is a way (save adding all objects to a list and then iterating through it) to simply call the .gravity() method on every object that has it. Basically, this is what I want to do:

for ALL_OBJECTS:
    if OBJECT has .gravity():
        OBJECT.gravity()
You want the hasattr() function here:
for obj in all_objects:
    if hasattr(obj, 'gravity'):
        obj.gravity()
or, if the gravity method is defined by a specific parent class, you can test for that too:
for obj in all_objects:
    if isinstance(obj, Planet):
        obj.gravity()
You can also do it this way, which is arguably more Pythonic (ask forgiveness rather than permission):

for obj in all_objects:
    try:
        obj.gravity()
    except AttributeError:
        pass
Or use getattr with its default set to lambda: None, a no-op stand-in (this assumes gravity() takes no arguments):

for obj in all_objects:
    getattr(obj, 'gravity', lambda: None)()

Pythonic way to add several methods to several classes?

What is the most pythonic way to add several identical methods to several classes?
I could use a class decorator, but that seems to bring in a fair bit of complication, and it's harder to write and read than the other methods.
I could make a base class with all the methods and let the other classes inherit, but then for some of the classes I would be very tempted to allow multiple inheritance, which I have read frequently is to be avoided or minimized. Also, the "is-a" relationship does not apply.
I could also change them from methods to stand-alone functions which just expect their arguments to supply the appropriate attributes through duck-typing. This is in some ways clean, but it is less object-oriented and makes it less clear when a function can be used on that type of object.
I could use delegation, but this requires all of the classes to have methods that call up to the helper class's methods. This would make the code base much longer than the other options and would require adding a delegating method every time I want to add a new function to the helper class.
I know giving one class an instance of the other as an attribute works nicely in some cases, but it does not always work cleanly and can make calls more complicated than they would be otherwise.
After playing around with it a bit, I am leaning towards inheritance even when it leads to multiple inheritance. But I hesitate due to numerous texts warning very strongly against ever allowing multiple inheritance, with some (such as the Wikipedia entry) going so far as to say that inheritance purely for code reuse, such as this, should be minimized.
This may be more clear with an example, so for a simplified example say we are dealing with numerous distinct classes which all have a location on an x, y grid. There are a lot of operations we might want to make methods of everything with an x, y location, such as a method to get the distance between two such entities, or the relative direction, midpoint between them, etc.
What would be the most pythonic way to give all such classes access to these methods that rely only on having x and y as attributes?
For your specific example, I would try to take advantage of duck-typing. Write plain, simple functions that take objects which are assumed to have x and y attributes:

import math

def distance(a, b):
    """
    Return the distance between `a` and `b`.
    `a` and `b` should have `x` and `y` attributes.
    """
    return math.sqrt((a.x - b.x)**2 + (a.y - b.y)**2)
Very simple. To make it clear how the function can be used, just document it.
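For instance, using a namedtuple as a stand-in for any class with x and y attributes (and assuming the distance function above):

from collections import namedtuple

Point = namedtuple("Point", ["x", "y"])
print(distance(Point(0, 0), Point(3, 4)))  # 5.0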
Plain old functions are best for this problem. E.g. instead of this ...
class BaseGeoObject:
    def distanceFromZeroZero(self):
        return math.sqrt(self.x()**2 + self.y()**2)
    ...
... just have functions like this one:
def distanceFromZeroZero(point):
    return math.sqrt(point.x()**2 + point.y()**2)
This is a good solution because it's also easy to test - it's not necessary to subclass just to exercise a specific function.

pros and cons of using factory vs regular constructor

(Using Python 3.2, though I doubt it matters.)
I have class Data, class Rules, and class Result. I use lowercase to denote an instance of the class.
A rules object contains rules that, if applied to a data object, can create a result object.
I'm deciding where to put the (rather complicated and evolving) code that actually applies the rules to the data. I can see two choices:
Put that code inside a Result method, say parse_rules. The Result constructor would take a rules object as an argument and pass it on to self.parse_rules.
Put that code inside a new class, ResultFactory. ResultFactory would be a singleton class with a method, say build_result, which takes rules as an argument and returns a newly built result object.
What are the pros and cons of the two approaches?
The GRASP design principles provide guidelines for assigning responsibility to classes and objects in object-oriented design. For example, the Creator pattern suggests: In general, a class B should be responsible for creating instances of class A if one, or preferably more, of the following apply:
Instances of B contain or compositely aggregate instances of A
Instances of B record instances of A
Instances of B closely use instances of A
Instances of B have the initializing information for instances of A and pass it on creation.
In your example, you have complicated and evolving code for applying rules to data. That suggests the use of the Factory Pattern.
Putting the code in Results is contraindicated because 1) results don't create results, and 2) results aren't the information expert (i.e. they don't have most of the knowledge that is needed).
In short, the ResultFactory seems like a reasonable place to concentrate the knowledge of how to apply rules to data to generate results. If you were to try to push all of this logic into class constructors for either Results or Rules, it would lead to tight coupling and loss of cohesion.
Third scenario:
You may want to consider a third scenario:
Put the code inside the method Rules.__call__.
Instantiate Result like this: result = rules(data)
Pros:
Results can be totally unaware of the Rules that generates them (and maybe even of the original Data).
Every Rules sub-class can customize its Result creation.
It feels natural (to me): Rules applied to Data yield Result.
And you'll have a couple of GRASP principles on your side:
Creator: Instances of Rules have the initializing information for instances of Result and pass it on creation.
Information Expert: Information Expert will lead to placing the responsibility on the class with the most information required to fulfill it.
Side effects:
Coupling: you'll raise the coupling between Rules and Data:
You need to pass the whole data set to every Rules instance,
which means that each Rules instance has to decide which data it applies to.
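A minimal sketch of this third scenario (the Data contents and the predicate-based Rules body are placeholders invented for illustration):

class Result:
    def __init__(self, values):
        self.values = values

class Rules:
    def __init__(self, predicate):
        self.predicate = predicate

    def __call__(self, data):
        # Applying rules to data yields a Result; subclasses could override this.
        return Result([item for item in data if self.predicate(item)])

positive_only = Rules(lambda x: x > 0)
result = positive_only([-2, 5, 3])
print(result.values)  # [5, 3]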
Why not put the rules in their own classes? If you create a RuleBase class, then each rule can derive from it. This way, polymorphism can be used when Data needs rules applied. Data doesn't need to know or care which Rule instances were applied (unless Data itself is the one that knows which rules should be applied).
When rules need to be invoked, a data instance can call RuleBase.ExecuteRules() and pass itself in as an argument. The correct subclass of Rule can be chosen directly by Data, if Data knows which Rule is necessary. Or some other design pattern can be used, like Chain of Responsibility, where Data invokes the pattern and lets a Result come back.
This would make a great whiteboard discussion.
Can you make ResultFactory a pure function? It's not useful to create a singleton object if all you need is a function.
Well, the second is downright silly, especially with all the singletonness. If Result requires Rules to create an instance, and you can't create one without it, it should take that as an argument to __init__. No pattern shopping necessary.

Which is "better" practice? Passing object references or object method references in Python

I'm writing a small piece of code in Python and am curious what other people think of this.
I have a few classes, each with a few methods, and am trying to determine what is "better": to pass objects through method calls, or to pass methods through method calls when only one method from an object is needed. Basically, should I do this:
def do_something(self, x, y, manipulator):
    self.my_value = manipulator.process(x, y)
or this
def do_the_same_thing_but_differently(self, x, y, manipulation):
    self.my_value = manipulation(x, y)
The way I see it, the second one is arguably "better" because it promotes even looser coupling/stronger cohesion between the manipulation and the other class. I'm curious to see some arguments for and against this approach for cases when only a single method is needed from an object.
EDIT: I removed the OOP wording because it was clearly upsetting. I was mostly referring to loose coupling and high cohesion.
The second solution may provide looser coupling because it is more "functional", not more "OOP". The first solution has the advantage that it works in languages like C++ which don't have closures (though one can get a similar effect using templates and pointer-to-member-functions); but in a language like Python, IMHO the 2nd alternative seems to be more "natural".
EDIT: you will find a very nice discussion of functional vs. object-oriented techniques in the free book "Higher-Order Perl", available here:
http://hop.perl.plover.com/
(look at chapter 1, part 6). Though it is a Perl (not Python) book, the discussion there fits the question asked here exactly, and the functional techniques described there can be applied to Python in a similar way.
I would say the second approach, because it definitely looks like a callback. Callbacks are heavily used with the Hollywood principle ("don't call us, we'll call you"), a paradigm that assists in the development of code with high cohesion and low coupling [Ref 2].
I would definitely go with the second approach.
Also consider that you could change the interface of whatever Manipulator class is involved so that process is instead spelled __call__; then it will work transparently with the second approach.
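A minimal sketch of that idea (the Adder class and the free-standing function are invented for illustration):

class Adder:
    def __call__(self, x, y):
        return x + y

def do_the_same_thing_but_differently(x, y, manipulation):
    return manipulation(x, y)

adder = Adder()
print(do_the_same_thing_but_differently(2, 3, adder))  # 5 -- the instance itself is the callable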
