Expected behavior of class definition in Python

I have read multiple articles about OOP in Python but didn't find the answer. Here is my sample code as an example:
class Point(object):
    """basic point"""
    def __init__(self, x, y):
        self.x = x
        self.y = y

class Circle(object):
    """basic circle object"""
    def __init__(self, center, radius):
        self.center = center  # should be a Point object
        self.radius = radius

coord = Point(1, 2)
a = Circle(coord, 4)
b = Circle(4, 5)
If I understand correctly, this is valid Python code, but circle "b" doesn't have a Point object as its center. If there is a method on the Circle object that uses the center to do a calculation (to compute the circle's area, for example), it will fail for b. Do I have to enforce the type, or is it the programmer's responsibility to pass an object of the expected type at instantiation?

As others have said, it is up to you to enforce typing.
However, Python widely uses the concept of duck typing, which means in your case, you don't necessarily need a Point object for the center, you just need something that behaves the same as your Point class. In this simple example, Point doesn't provide any methods; it's simply a class whose objects will have x and y attributes. That means your Circle could accept any object for its center as long as it provides x and y attributes, that is, provides the same interface as Point.
This means that the most important thing to do is document what interface your class provides, and what each function or method expects from its arguments.
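For example, here is a minimal sketch (FakePoint is a hypothetical stand-in, not part of the question's code) showing that any object exposing x and y attributes works as a center:
from collections import namedtuple

# A hypothetical stand-in for Point: any object exposing x and y attributes.
FakePoint = namedtuple("FakePoint", ["x", "y"])

a = Circle(Point(1, 2), 4)      # a real Point as the center
c = Circle(FakePoint(1, 2), 4)  # duck typing: any object with x and y works

# Both centers satisfy the same interface:
print(a.center.x, a.center.y)   # 1 2
print(c.center.x, c.center.y)   # 1 2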

It is up to you to enforce types, and up to the caller to provide the proper data.
One of the underlying philosophies of the Python community is that we're all responsible programmers. If it is critical that the type be enforced against accidental or malicious mistakes, you must build that into your objects.
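As a minimal sketch of what building that check in could look like (one possible approach, not the only one):
class Circle(object):
    """basic circle object that validates its center"""
    def __init__(self, center, radius):
        # Fail fast with a clear message instead of breaking later in a calculation.
        if not isinstance(center, Point):
            raise TypeError("center must be a Point, got %r" % type(center).__name__)
        self.center = center
        self.radius = radius

Circle(Point(1, 2), 4)  # fine
Circle(4, 5)            # raises TypeError immediately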

Related

Should I repeat parent class __init__ arguments in the child class's __init__, or use **kwargs instead?

Imagine a base class that you'd like to inherit from:
class Shape:
    def __init__(self, x: float, y: float):
        self.x = x
        self.y = y
There seem to be two common patterns of handling a parent's kwargs in a child class's __init__ method.
You can restate the parent's interface completely:
class Circle(Shape):
    def __init__(self, x: float, y: float, radius: float):
        super().__init__(x=x, y=y)
        self.radius = radius
Or you can specify only the part of the interface which is specific to the child, and hand the remaining kwargs to the parent's __init__:
class Circle(Shape):
    def __init__(self, radius: float, **kwargs):
        super().__init__(**kwargs)
        self.radius = radius
Both of these seem to have pretty big drawbacks, so I'd be interested to hear what is considered standard or best practice.
The "restate the interface" method is appealing in toy examples like you commonly find in discussions of Python inheritance, but what if we're subclassing something with a really complicated interface, like pandas.DataFrame or logging.Logger?
Also, if the parent interface changes, I have to remember to change all of my child class's interfaces to match, type hints and all. Not very DRY.
In these cases, you're almost certain to go for the **kwargs option.
But the **kwargs option leaves the user unsure about which arguments are actually required.
In the toy example above, a user might naively write:
circle = Circle() # Argument missing for parameter "radius"
Their IDE (or mypy or Pyright) is being helpful and saying that the radius parameter is required.
circle = Circle(radius=5)
The IDE (or type checker) is now happy, but the code won't actually run:
Traceback (most recent call last):
  File "foo.py", line 13, in <module>
    circle = Circle(radius=5)
  File "foo.py", line 9, in __init__
    super().__init__(**kwargs)
TypeError: Shape.__init__() missing 2 required positional arguments: 'x' and 'y'
So I'm stuck with a choice between writing out the parent interface multiple times, and not being warned by my IDE when I'm using a child class incorrectly.
What to do?
Research
This mypy issue is loosely related to this.
This reddit thread has a good rehearsal of the relevant arguments for/against each approach I outline.
This SO question is maybe a duplicate of this one. Does the fact I'm talking about __init__ make any difference though?
I've found a real duplicate, although the answer is a bit esoteric and doesn't seem like it would qualify as best, or normal, practice.
If the parent class has required (positional) arguments (as your Shape class does), then I'd argue that you must include those arguments in the __init__ of the child (Circle) for the sake of being able to pass around "shape-like" instances and be sure that a Circle will behave like any other shape. So this would be your Circle class:
class Shape:
    def __init__(self, x: float, y: float):
        self.x = x
        self.y = y

class Circle(Shape):
    def __init__(self, x: float, y: float, radius: float):
        super().__init__(x=x, y=y)
        self.radius = radius

# The expectation is that this should work with all instances of `Shape`
def move_shape(shape: Shape, x: float, y: float):
    shape.x = x
    shape.y = y
However if the parent class is using optional kwargs, that's where stuff gets tricky. You shouldn't have to define colour: str on your Circle class just because colour is an optional argument for Shape. It's up to the developer using your Circle class to know the interface of all shapes and if need be, interrogate the code and note that Circle can accept colour=green as it passes **kwargs to its parent constructor:
class Shape:
    def __init__(self, x: float, y: float, colour: str = "black"):
        self.x = x
        self.y = y
        self.colour = colour

class Circle(Shape):
    def __init__(self, x: float, y: float, radius: float, **kwargs):
        super().__init__(x=x, y=y, **kwargs)
        self.radius = radius

def move_shape(shape: Shape, x: float, y: float):
    shape.x = x
    shape.y = y

def colour_shape(shape: Shape, colour: str):
    shape.colour = colour
Generally my attitude is that a docstring exists to explain why something is written the way it is, not what it's doing; that should be clear from the code. So, if your Circle requires x and y parameters for use in the parent class, then it should say as much in its signature. If the parent class has optional parameters, then **kwargs is sufficient in the child class, and it's incumbent upon the developer to interrogate Circle and Shape to see what the options are.
The solution I would consider most reasonable (though I realize what I'm saying might not be canonical) is to repeat the parent-class parameters that are required, but leave the optional ones to **kwargs.
Benefits:
clean code that is easy for a human reader to understand,
keeps the type checkers happy,
repeats only the essential stuff,
supports all the optional parameters without repeating those.
I think the best way to do this is to use the **kwargs approach, but to also define a __signature__ attribute on the class. This is an inspect.Signature object that describes the arguments the class expects.
from inspect import Signature, Parameter

class Shape:
    def __init__(self, x: float, y: float):
        self.x = x
        self.y = y

class Circle(Shape):
    def __init__(self, radius: float, **kwargs):
        super().__init__(**kwargs)
        self.radius = radius

    __signature__ = Signature(
        parameters=[
            Parameter('radius', Parameter.POSITIONAL_OR_KEYWORD, annotation=float),
            Parameter('x', Parameter.POSITIONAL_OR_KEYWORD, annotation=float),
            Parameter('y', Parameter.POSITIONAL_OR_KEYWORD, annotation=float),
        ]
    )
This allows tools that rely on inspect.signature (such as help() and some IDEs) to see that radius, x and y are all expected arguments, even though the __init__ itself only spells out radius.
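A quick way to check what introspection-based tools will report (my own sketch):
import inspect

# inspect.signature() honours the __signature__ attribute defined above.
print(inspect.signature(Circle))   # (radius: float, x: float, y: float)
help(Circle)                       # shows the same combined signature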
TL;DR
In Python 3.10+, dataclasses provide a clean solution.
Dataclasses provide inheritance mechanisms with automatically generated, cumulative constructors that give mypy or VS Code everything they need for type checking, yet are completely DRY.
Starting with Python 3.10, dataclasses are suitable more often than one might think, so I contend they could be considered a canonical answer to the question here, although probably not for earlier versions of Python.
Details
My thought process
I've been doing some more thinking and research on this topic, partly inspired by other answers and comments here. I have also done some tests, and eventually convinced myself that the canonical answer should be to use @dataclass whenever possible (which is more often than one might think, at least with Python >= 3.10).
There will of course be cases that really cannot be cast to dataclasses, and then I'll say use my first answer for those.
Augmented example without dataclasses
Let me augment the example a bit to illustrate my idea.
class Shape:
    def __init__(self, x: float, y: float, name="default name", verbose=False):
        self.x = x
        self.y = y
        self.name = name
        if verbose:
            print("Initialized:", self)

    def __repr__(self):
        return f"{type(self).__name__}(x={self.x},y={self.y},name={self.name})"

class Circle(Shape):
    def __init__(self, x: float, y: float, r: float, **kwargs):
        self.r = r
        super().__init__(x, y, **kwargs)

    def __repr__(self):
        return f"{type(self).__name__}(r={self.r},x={self.x},y={self.y},name={self.name})"
Here I've added the optional parameter name with a default value that gets stored, and the optional parameter verbose that affects what __init__ does without getting stored. Those add parameters to __init__ beyond just required data fields.
And I've already applied the solution I suggested in my first answer, which was to repeat the required arguments, but only the required arguments.
Solution using dataclasses in Python 3.10
Now, let's rewrite this with dataclasses:
from dataclasses import dataclass, InitVar, field

@dataclass
class Shape:
    x: float
    y: float
    name: str = field(kw_only=True, default="default name")
    verbose: InitVar[bool] = field(kw_only=True, default=False)

    def __post_init__(self, verbose):
        if verbose:
            print("Initialized:", self)

@dataclass
class Circle(Shape):
    r: float
Notice how much shorter the code is. name is still optional with a default value. verbose is still accepted as an initialization parameter that is not stored. I get my __repr__ for free. And it makes the constructor of Circle explicitly require x and y, as well as r, so both mypy and pylint (and presumably vscode too) do complain if any of them is missing. In fact, being automatically generated, the Circle constructor repeats everything in the Shape constructor, but I didn't have to write it, so that's perfectly DRY.
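For illustration (my own quick check, not part of the original answer), the generated constructor behaves like this:
c = Circle(1.0, 2.0, 3.0, verbose=True)
# Prints: Initialized: Circle(x=1.0, y=2.0, name='default name', r=3.0)

Circle(1.0, 2.0)  # TypeError at runtime; mypy and pylint flag the missing 'r' as well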
An inherited init-only parameter
Let's add another init-only parameter, scale, which has the effect of scaling everything in both classes, i.e., x, y and r are multiplied by scale. This case is a bit twisted, but it lets me require a __post_init__ in the subclass too, which I would like to illustrate.
from dataclasses import dataclass, InitVar, field

@dataclass
class DCShape:
    x: float
    y: float
    scale: InitVar[float] = field(kw_only=True, default=1)
    name: str = field(kw_only=True, default="default name")
    verbose: InitVar[bool] = field(kw_only=True, default=False)

    def __post_init__(self, scale, verbose):
        self.x = scale * self.x
        self.y = scale * self.y
        if verbose:
            print("Initialized:", self)

@dataclass
class DCCircle(DCShape):
    r: float

    def __post_init__(self, scale, verbose):
        self.r = scale * self.r
        super().__post_init__(scale, verbose)
This is a pretty decent solution too, in my opinion. I did have to repeat scale and verbose in both classes' __post_init__ methods, and the subclass's __post_init__ has to call the superclass's __post_init__ explicitly, but this is still something I'd be happy to use in real production code.
Why not with Python <= 3.9?
To make this clean, I had to use keyword-only fields, which were only introduced to dataclasses in Python 3.10.
With earlier versions, I would have had to give Circle.r a default value too, and then presumably add custom code to make sure that default value wasn't used, which would mean mypy would not notice if r was missing; I feel that kills the solution. Although for cases where the base class only has required fields, dataclasses work well before 3.10 too.
References
The dataclasses module
__post_init__: other initialization code
InitVar: init only variables
I come from C++, where this is OOP 101. Since I transitioned to Python, throughout my career I have used this approach no matter how annoying it was to duplicate parent constructor arguments. I also found out the hard way that **kwargs is very hard to debug in a large enough code base. So, based on my experience, I will upvote @joanis' answer, highlighting the benefits it lists:
Benefits:
clean code that is easy for a human reader to understand,
keeps the type checkers happy,
repeats only the essential stuff,
supports all the optional parameters without repeating those.
**kwargs remains an option if one needs it, but as far as I know type checking will still not work with it; i.e., a debugging nightmare. Python is, after all, an interpreted language.
And if you analyse the problem a bit deeper, it makes sense. Python object creation only looks at a single class name, e.g. r = Circle(x, y); there is no way to identify what it inherits from without looking at the class. In C++ we could write something like the code below, which is compiled and then resolved at runtime. Python IDEs report most errors prior to execution, but I can see why it has taken time to provide a solution to this particular problem, since **kwargs essentially carries no information that could be validated before a run.
// C++ code
#include <iostream>
#include <string>
using namespace std;

class Shape {
public:
    string x = "";
};

class Circle : public Shape {
public:
    string x = "50";
};

int main(void) {
    Shape r = Circle();
    cout << r.x;
}
Do I look forward to this in Python? Absolutely, along with some proper polymorphism like C++. But sadly, as advertised, these languages have very different mechanisms. To quote from here: "Python does not have the concept of virtual function or interface like C++. However, Python provides an infrastructure that allows us to build something resembling interfaces' functionality." (Not a very reputable site, but I found this quote noteworthy.)
Along the same lines, a comment on this line in the problem statement:
Also, if the parent interface changes, I have to remember to change all of my child class's interfaces to match, type hints and all. Not very DRY.
I actually believe this is necessary! We should be careful not to change Parent classes after a bunch of code is written.
However, I recommend reading "Learning Path: Advanced Python Programming" by Dr. Gabriele Lanaro (or any other book on Python design patterns) to learn a bit about how you can avoid many pitfalls like this via design patterns.
Lastly, while this may not be the complete and satisfactory answer you are looking for, my suggestions are:
If the project is large enough, stick to duplicating class constructors without using **kwargs (the solution with __signature__ was nice! I can't see how it could hurt at scale either).
If the parent class is a library file or third-party class, consider using an Adapter pattern.
Have a look at the Builder pattern and something called the Fluent Builder pattern.
Hope this helps!
I searched a little bit and this is what I found:
1. There are lots of libraries that can do the job, like Makefun, Merge_args, or Python Forge. They all work the same way: using the inspect and/or functools modules, they merge the signatures (they get all parameters via e.g. tuple(signature(your_function).parameters.values()) or getfullargspec(your_function.__init__) (remember to slice it)) and then replace the signature.
Since they have put a lot of effort into that task, I'd recommend you use them if you want a solid solution.
2. I had the same problem long ago, and I ended up restating only the most important parameters and leaving the rest to **kwargs. But there's a better workaround if you want something complete without any library (though not so DRY on my part, haha): just use print(signature(Shape.__init__)) (remember to from inspect import signature) and copy what's useful to you :).
3. I saw @cactusboat's answer and came up with this too; hope it helps:
from inspect import signature, Signature
from itertools import chain
from collections import OrderedDict

def make_signature(subclass, superclass):
    """Returns a Signature object for the subclass constructor's __signature__ attribute."""
    super_params = signature(superclass.__init__).parameters
    sub_params = signature(subclass.__init__).parameters
    # Parent parameters first, then the subclass's own; duplicates (self) collapse.
    mixed_params = OrderedDict(chain(super_params.items(), sub_params.items()))
    mixed_signature = Signature(parameters=list(mixed_params.values()))
    return mixed_signature

class Shape:
    def __init__(self, x: float, y: float):
        self.x = x
        self.y = y

class Circle(Shape):
    def __init__(self, radius: float, **kwargs):
        super().__init__(**kwargs)
        self.radius = radius

Circle.__signature__ = make_signature(Circle, Shape)

if __name__ == "__main__":
    help(Circle)  # Circle(self, x: float, y: float, radius: float, **kwargs) ...
4. As you can see, there are many ways, but there isn't a canonical one. The closest is PEP 362, which has an example of how to visualize a callable object's signature. But it's hard, since you would be falling into the Adapter pattern.
Hope it helps! I'm kinda new to programming, so I did my best to find the best that I could. Greetings!
I believe you have to answer this question (which calls for opinions) from the point of view of how you intend your code to be used.
For you as a developer who knows the classes you are coding, **kwargs can save some lengthy copy/pasting and refactoring if a superclass is modified. Meanwhile, you can use whichever method you like most and even mix approaches, because you have full control over your own code. In the end, this is a matter of convenience.
For dev users who will use your set of classes as a library, they expect a fully documented API, and your potential refactoring work does not matter to them. So in that case it is more intuitive for them to have the full parameter list.
For lambda users (average end users) who will just use your code without knowing what is inside it, it does not matter at all, but it should work no matter what. A general rule of thumb is to catch as many errors as possible in the IDE, while you can still fix them, rather than at runtime when it is too late. In that regard, **kwargs is more fragile and more prone to lead to a bad user experience.
I would suggest using Pydantic. This introduces a dependency which might be a deal breaker, but I think it might be what you are looking for.
Example:
from pydantic import BaseModel

class ShapePydantic(BaseModel):
    x: float
    y: float

class CirclePydantic(ShapePydantic):
    radius: float
The resulting type hint in the IDE shows x, y and radius as expected fields.
Worth noting that Pydantic allows extra inputs (extra fields or arguments) by default, but this can be turned into an error using extra=Extra.forbid.
from pydantic import Extra

class CirclePydantic(ShapePydantic, extra=Extra.forbid):
    radius: float
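A quick usage sketch (my own illustration, assuming Pydantic v1 as used above):
c = CirclePydantic(x=1, y=2, radius=3)
print(c)  # x=1.0 y=2.0 radius=3.0

CirclePydantic(radius=3)                          # ValidationError: x and y are missing
CirclePydantic(x=1, y=2, radius=3, colour="red")  # with extra=Extra.forbid, also a ValidationError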
It's a bit tricky and depends on how you intend your class to be used. For example, if you allow positional arguments, then you get the situation you describe and basically break the expected order of arguments.
My opinion is that if you use **kwargs, then it's better to prohibit positional arguments entirely (notice the asterisk before the arguments):
class Shape:
    def __init__(self, *, x: float, y: float):
        self.x = x
        self.y = y

class Circle(Shape):
    def __init__(self, *, radius: float, **kwargs):
        super().__init__(**kwargs)
        self.radius = radius
This solves the issue of unexpected order but does not help the end user.
For that I would suggest using stubs. It can still be considered code duplication, although it can be generated for you (granted your code is not too complicated) and tweaked manually if needed. Besides that, it allows the developer to provide better type annotations in complicated situations.
That way, you can use whichever variant you like as a developer, as long as the stubs match your implementation; you can even support overloaded initializers, and the IDE will show the user which signature applies to their arguments.
Still, I would suggest not mixing named positional arguments and **kwargs unless there is a good, clear reason for it (like generic decorators or some kind of proxy). Things become especially complicated when using the *args, **kwargs combo, since the client can now unknowingly pass the same argument twice (if there are no type annotations/stubs). This forces you to handle such cases and write complicated "parsing" of arguments. Such an approach can be justified for a large and complicated interface and even considered better, since it provides more flexibility in a way, but for a small interface it would be overkill and a pain.
If using *args, **kwargs, then a stub file is a must in my opinion.
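As a minimal sketch of what such a stub could look like (the file name and layout are my own assumption, with the keyword-only classes above living in shapes.py):
# shapes.pyi -- hand-written or generated stub kept next to shapes.py
class Shape:
    x: float
    y: float
    def __init__(self, *, x: float, y: float) -> None: ...

class Circle(Shape):
    radius: float
    # The stub spells out the full parameter set even though the
    # implementation only declares radius and **kwargs.
    def __init__(self, *, radius: float, x: float, y: float) -> None: ...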
This is probably not the best practice, but if you want to avoid relying on docstrings and want the IDE (type checker) to find these errors, you can use this implementation, which relies on composition instead of inheritance:
class Shape:
    def __init__(self, x: float, y: float):
        self.x = x
        self.y = y

class Circle:
    def __init__(self, radius: float, shape: Shape):
        self.shape = shape
        self.radius = radius
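Usage then looks like this (my own illustration); the type checker can flag missing arguments on both classes:
circle = Circle(radius=5, shape=Shape(x=1, y=2))
print(circle.shape.x, circle.shape.y, circle.radius)  # 1 2 5

Circle(radius=5)  # the IDE/type checker flags the missing 'shape' argument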

Getter/setter but for nested attributes?

Suppose I have two classes. The simple Square class:
class Square:
    def __init__(self, side):
        self.side = side
And the slightly more complex MyClass class:
class MyClass:
    def __init__(self, square=None):
        if square is None:
            self.square = Square()
        else:
            self.square = square
        self.rounded_side = round(self.square.side)
I instantiate a MyClass object like so:
myObj = MyClass()
In this situation, how can one achieve the following behavior?
Changing myObj.rounded_side to X, automatically changes myObj.square.side also to X.
Changing myObj.square.side to X, automatically changes myObj.rounded_side to round(X).
If possible, in a way that doesn't require any modifications to the Square class (this is a simplified version of the problem I'm currently facing; in the original version, I don't have access to the code for Square).
What I tried so far:
My first attempt was to transform rounded_side into a property. That makes it possible to obtain behavior 1. However, I fail to see how I can transform square also into a property, in a way that makes it possible to obtain behavior 2.
I also thought about making MyClass inherit from Square, so that both attributes are at the same depth, but then I'd lose some of the desired structure of my class (I'd rather have the user access myObj.square.side than myObj.side).
If someone is interested, the actual problem I'm facing:
I'm writing a game in pygame, and my Player class has an attribute for its position, which is an array of two floats (a 2D position). This is used to determine where the player is and to decide how to update its position in the next update step of the game's physics.
However, I also want to have a Rect attribute in the Player class (which holds the information about a rectangle around the player's image), to be used when displaying the player in the screen, and when inferring collisions. The Rect class uses integers to store the position of the rectangle (pixel coordinates).
So as to be able to store the position information of the player in a float, but also use the Rect class for convenience, I thought about having this dependency between them, where changing one alters also the other accordingly.
As you've said, make rounded_side a property, but have it access the value in self.square for both getting and setting.
@property
def rounded_side(self):
    return self.square.side

@rounded_side.setter
def rounded_side(self, side):
    self.square.side = side
Now, setting rounded_side will use the setter which sets the value in square; setting the value on square directly will mean that it would be looked up from there by the property getter.
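Putting it together in MyClass (a sketch on my part: I added the round() call so reading rounded_side still returns a rounded value, and gave the default square a side of 0 since Square requires one):
class MyClass:
    def __init__(self, square=None):
        self.square = square if square is not None else Square(0)

    @property
    def rounded_side(self):
        # Always derived from the single source of truth: self.square.side
        return round(self.square.side)

    @rounded_side.setter
    def rounded_side(self, side):
        self.square.side = side

myObj = MyClass(Square(2.7))
print(myObj.rounded_side)   # 3
myObj.square.side = 4.2
print(myObj.rounded_side)   # 4 -- stays in sync because it is computed on access
myObj.rounded_side = 5
print(myObj.square.side)    # 5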

Python subclass loses subclass attributes

This is question based on the shapely package but I think it's a more general question.
Basically I have two classes coming from shapely. One is called Point and the other is called MultiPoint. You instantiate Point with a tuple of coordinates and MultiPoint with a list of Points. You can access the points in a MultiPoint using indexing.
p1 = Point((1,1))
p2 = Point((2,2))
mp = MultiPoint([p1,p2])
In [315]: MultiPoint((p1, p2))[0]
Out[315]: <shapely.geometry.point.Point at 0x1049b1f50>
I want to subclass Point and use it for the location of a car.
class Car(Point):
    def __init__(self, coords, speed):
        super(self.__class__, self).__init__(coords)
        self.speed = speed
Now I can write
c1 = Car((1,1), speed=2)
c2 = Car((2,2), speed=3)
mc = MultiPoint([c1,c2])
I can access the cars using indexing but they are no longer cars. They are points. In particular they have no attribute speed.
In [316]: MultiPoint((c1, c2))[0]
Out[316]: <shapely.geometry.point.Point at 0x1049b1610>
In [317]: MultiPoint((c1, c2))[0].speed
AttributeError: 'Point' object has no attribute 'speed'
Is there a way of fixing this by subclassing Multipoint? I guess I don't know what happens to cars (points) when they are passed to MultiPoint.
Edit: I made a mistake typing the code example for c1 and c2. It's fixed now. I wasn't instantiating points, I was instantiating cars.
While the MultiPoint class is conceptually a collection of Points, it's not actually keeping a reference to the Point instance (or instances of a Point subclass) that you pass in to its constructor. Rather, it copies the coordinates from the point into its own internal data structure. This loses the extra attributes you've added to your Car subclass. Here's the brief passage in the docs:
The constructor also accepts another MultiPoint instance or an unordered sequence of Point instances, thereby making copies.
I'd suggest designing your Car class to have an attribute that is a Point (or maybe a reference to a MultiPoint and an index), rather than making it be a subclass of Point. Inheriting from another conceptually unrelated type is usually a bad idea when you can use encapsulation instead.
It can help clarify your design to remember that inheritance means "IS-A", while encapsulation means "HAS-A". For instance, a rectangle IS-A shape, so if you write a Shape class, a Rectangle would be a perfectly reasonable subclass. In your case, your inheritance suggests that a Car IS-A Point, which doesn't make much sense. It would make more sense to say that a Car HAS-A position (which is a Point).
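A sketch of that encapsulation approach (my own illustration, not from the original answer):
from shapely.geometry import Point, MultiPoint

class Car:
    """A car HAS-A position rather than IS-A Point."""
    def __init__(self, coords, speed):
        self.position = Point(coords)
        self.speed = speed

c1 = Car((1, 1), speed=2)
c2 = Car((2, 2), speed=3)

# Keep the Car objects around; build geometry from their positions when needed.
cars = [c1, c2]
mp = MultiPoint([car.position for car in cars])
print(cars[0].speed, mp.geoms[0])  # 2 POINT (1 1)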
You should instantiate the Car objects with the Car class constructor:
c1 = Car((1,1), speed=2)
c2 = Car((2,2), speed=3)
mc = MultiPoint([c1,c2])
Python follows the duck typing principle:
If it quacks like a duck and looks like a duck, then it must be a duck.
This makes polymorphism very easy. Your objects are not tied to any one particular class; they behave according to their superclasses, and that behaviour can differ in different circumstances.
Your MultiPoint class will gladly accept Car objects in place of Point objects.
I'm curious about your Point class, could you post it?
The problem is here:
c1 = Point((1,1), speed=2)
c2 = Point((2,2), speed=3)
mc = MultiPoint([c1,c2])
According to the description you had to instantiate Car objects, not Points:
c1 = Car((1,1), 2)
c2 = Car((2,2), 3)

Class Segment Representing a Line Segment in a Plane?

The problem we were given is to create a class Segment that represents a line segment in the plane and supports these methods:
__init__(): constructor that takes as input a pair of Point objects that represent the endpoints of the line segment;
length(): returns the length of the segment;
slope() returns the slope of the segment or None if the slope is unbounded.
>>> p1 = Point(3,4)
>>> p2 = Point()
>>> s = Segment (p1, p2)
>>> s.length()
5.0
>>> s.slope()
0.75
I am confused on how I would get it to print the slope as well as the length. I feel like I would know how to do this if I just had to create a function that returned these results, but being able to transform these into classes is what is really confusing me.
If you are new to object-oriented programming, I may throw around a few terms with which you are not familiar: if that's the case, I suggest you look up any words you don't know ;)
The difference between a method and a regular function is the fact that the method is "owned" by the class. A method is basically a function which is only performed on (usually objects of) that class. In python, you declare methods by putting them in the class body, indented once, like so:
class MyClass:
    def myMethod(self):
        ...
One thing to keep in mind is the keyword self - it refers to the instance of that class, i.e. the particular object. You don't pass it as an argument in parentheses, though - it's that s before the . in your example. When you instantiated s by doing s = Segment(p1, p2), you created an instance of the class Segment called s. You must pass self as the first argument to your class methods; how else are you supposed to know upon which object to operate?
__init__() is a special method which defines how to create a new object of that class. You call it by typing the class name, followed by your arguments in parentheses as if the class name were a method. In your example that's Point(3, 4) and Segment(p1, p2).
Normally, you might type s.p1 to get the first point of a Segment, but in your method bodies, you don't know what s is, since the classes are declared before s is instantiated. You do, however, know to pass self, so you would type self.p1 to get the p1 of whichever Segment to which you are referring.
Do you already have the Point class defined somewhere? Look at its code for examples.
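To make this concrete, here is a minimal sketch of what the two classes could look like (assuming the usual rise-over-run definition of slope; your course's Point class may differ):
import math

class Point:
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

class Segment:
    def __init__(self, p1, p2):
        # Store the two endpoint Point objects on the instance.
        self.p1 = p1
        self.p2 = p2

    def length(self):
        return math.hypot(self.p2.x - self.p1.x, self.p2.y - self.p1.y)

    def slope(self):
        dx = self.p2.x - self.p1.x
        if dx == 0:
            return None  # vertical segment: slope is unbounded
        return (self.p2.y - self.p1.y) / dx

s = Segment(Point(3, 4), Point())
print(s.length())  # 5.0
print(s.slope())   # 1.333...; the assignment's sample output of 0.75 is the reciprocal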
If you want some guided exercises, I suggest you check out codeacademy. It's very elementary but it will get the job done.

Replacing variable with function/class indicating dynamic value

In my program, I draw some quads. I want to add the functionality for them to scale up, then down, then go back to being static (to draw attention). In the quads I have:
self.scale = 10
Making scale change according to sin would be nice. But adding frequency, amplitude and logic to my already bloated quad class is something I take as a challenge to avoid.
Something like this:
import math

class mysin:
    def __init__(self):
        self.tick = 0.0
        self.freq = 1.0
        self.ampl = 1.0

    def update(self, amount):
        self.tick += amount

    def value(self):
        return math.sin(self.tick)
That class would also add itself to the logic system (getting update calls every frame). I would then do:
quad.scale = 10 # for static quad
quad.scale = mysin() # for cool scaling quad
The problem is that some calculations expect scale to hold a value. I could of course add another class whose value() returns a (previously saved) constant value and adapt all the calculations.
What I want to know now is: does this have a name, and is it a valid technique? I read the wiki article on functional programming and this idea sprang to mind as a wacky implementation (although I'm not sure it qualifies as FP). I could very well have been driven mad by that article. Put me back in line, fellow coders.
The distinction between
quad.scale = 10
and
quad.scale = MySin()
is minor. Within the Quad class definition, the scale attribute can be a property with proper getter and setter functions.
class Quad(object):
    @property
    def scale(self):
        return self._scale

    @scale.setter
    def scale(self, value):
        # handle numeric and MySin() values appropriately.
        ...
Alternate version with the explicit property() call (which I prefer):
class Quad(object):
    def get_scale(self):
        return self._scale

    def set_scale(self, value):
        # Handle numeric and MySin() values
        ...

    scale = property(get_scale, set_scale)
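One possible way to fill in those placeholder bodies (a sketch under my own assumptions, using duck typing on a value() method; the mysin class is the one from the question):
class Quad(object):
    def __init__(self, scale=1.0):
        self._scale = scale

    def get_scale(self):
        raw = self._scale
        # Duck typing: if the stored object can compute a value, ask it;
        # otherwise assume it is already a plain number.
        return raw.value() if hasattr(raw, "value") else raw

    def set_scale(self, value):
        # Accept either a number or a MySin-like object.
        self._scale = value

    scale = property(get_scale, set_scale)

q = Quad()
q.scale = 10        # static quad
print(q.scale)      # 10
q.scale = mysin()   # cool scaling quad
print(q.scale)      # 0.0, i.e. sin(0.0), until update() advances the tick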
Any other class should NOT know or care what type of value scale has. If some client does this
quad.scale * 2
Then you have design issues. You haven't properly encapsulated your design and Quad's client classes are too friendly with Quad.
If you absolutely must do this -- because you can't write a method function of Quad to encapsulate this -- then you have to make MySin a proper numeric class so it can respond to quad.scale * 2 requests properly.
It sounds like you want your quads to be dumb and to have an animator class which is smart. So, here are some suggestions:
Give the quads an attribute which indicates how to animate them (in addition to the scale and whatever else).
In an Animator class, on a frame update, iterate over your quads and decide how to treat each one, based on that attribute.
In the treatment of a quad, update the scale property of each dynamically changing quad to the appropriate float value. For static quads it never changes, for dynamic ones it changes based on any algorithm you like.
One advantage of this approach is that it allows you to vary different attributes (scale, opacity, fill colour ... you name it) while keeping the logic in the animator.
It's sort of like lazy evaluation. It is definitely a valid technique when used properly, but I don't think this is the right place to use it; it makes the code kind of confusing.
It sure is a valid technique, but a name? Having an object.value() instead of an int? Uhm. Object orientation? :)
If the methods that use this value require an integer and won't call any method on it, you could in fact create your own integer-like class that behaves exactly like an integer but changes its value.
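A rough sketch of that idea (my own illustration): a class that looks like a number to the calling code but recomputes its value every time it is used.
import math

class DynamicScale:
    """Hypothetical number-like wrapper, usable where a float is expected."""
    def __init__(self):
        self.tick = 0.0

    def update(self, amount):
        self.tick += amount

    def _value(self):
        return math.sin(self.tick)

    # Make it quack like a number for the operations the quad code performs.
    def __float__(self):
        return float(self._value())

    def __mul__(self, other):
        return self._value() * other

    __rmul__ = __mul__

scale = DynamicScale()
scale.update(math.pi / 2)
print(scale * 2)      # 2.0, because sin(pi/2) == 1.0
print(float(scale))   # 1.0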
