I have two classes in my project: Circuit and SubCircuit. At some point I may need to construct a SubCircuit from a Circuit. Is there a more elegant way to do that than what's done in the last four lines below? Circuit may gain new attributes at some point, for example, which would mean the conversion would also need to be updated - basically inviting bugs.
class Gate(abc.ABC):
    def __init__(self, size):
        self.in_ports = (InPort(self),) * size
        self.out_ports = (OutPort(self),) * size

class Circuit:
    def __init__(self, size):
        self.input = (OutPort(self),) * size
        self.output = (InPort(self),) * size
        self.gates = []
    # ... some other stuff that messes around with these attributes

class SubCircuit(Gate, Circuit):
    def __init__(self, size, circuit=None):
        Gate.__init__(self, size)
        Circuit.__init__(self, size)
        if circuit is not None:
            self.gates = circuit.gates
            self.input = circuit.input
            self.output = circuit.output
Bugs are already there - when you do self.gates = circuit.gates, circuit.gates being a list, you point both references at the same list - and if this list is updated on the original circuit, the update will be reflected in your SubCircuit instance.
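A tiny demonstration of that aliasing (the names here are illustrative, not from the question):

```python
# Two names bound to the same list: a mutation through one name is
# visible through the other; a copy breaks the link.
gates = ["and_gate"]
alias = gates              # same list object, not a copy
gates.append("or_gate")
assert alias == ["and_gate", "or_gate"]      # alias sees the change

snapshot = list(gates)     # shallow copy: a new, independent list
gates.append("xor_gate")
assert snapshot == ["and_gate", "or_gate"]   # unaffected by later appends
```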
I think the sanest pattern here is to give the class an alternate constructor, used when you have a circuit instance from which to populate your own:
from copy import copy

class SubCircuit(Gate, Circuit):
    def __init__(self, size):
        Gate.__init__(self, size)
        Circuit.__init__(self, size)

    @classmethod
    def from_circuit(cls, circuit, size):
        self = cls(size)
        for key, value in circuit.__dict__.items():
            setattr(self, key, copy(value))
        return self
One "right" thing to do is to have the classes' __init__ (and other) methods call each other through the collaborative use of super() instead of calling each other explicitly by name - however, if your classes and subclasses are fixed to these three, that may be a bit overkill, because Python's object does not handle extra parameters passed to its own __init__ method. (So you'd have to check in your base classes' __init__ whether they are the last ones before object in the Method Resolution Order, and swallow the remaining args there.)
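For completeness, a rough sketch of that cooperative approach. The port attributes are simplified stand-ins (the real code would use InPort/OutPort); the interesting part is that each class checks whether it is the last one before object in the MRO and swallows the remaining kwargs there:

```python
class Gate:
    def __init__(self, **kwargs):
        self.in_ports = ("in",) * kwargs["size"]   # stand-in for InPort objects
        mro = type(self).__mro__
        if mro[mro.index(Gate) + 1] is object:     # last class before object?
            kwargs = {}                            # swallow remaining args
        super().__init__(**kwargs)

class Circuit:
    def __init__(self, **kwargs):
        self.input = ("out",) * kwargs["size"]     # stand-in for OutPort objects
        self.gates = []
        mro = type(self).__mro__
        if mro[mro.index(Circuit) + 1] is object:
            kwargs = {}
        super().__init__(**kwargs)

class SubCircuit(Gate, Circuit):
    def __init__(self, size):
        # a single super() call walks the whole MRO: Gate, then Circuit
        super().__init__(size=size)
```

With this, `SubCircuit(2)` initializes both bases in one chain, and `Gate(size=3)` / `Circuit(size=3)` still work standalone - at the cost of the MRO bookkeeping the answer calls overkill.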
I'm sure this will be a duplicate question, but I can't seem to find the words to locate one.
I have a set of very similar models I'd like to code up. The models are all the same, apart from a single function / line of code. I'd like to avoid any code repetition. Let's see an MWE:
import numpy as np

class SinModel:
    def __init__(self):
        self.x = np.linspace(-np.pi, np.pi)

    def run(self):
        # Computations which are invariant of the function we use later
        self.y = np.sin(self.x)
        # More computations which are invariant of which function was used
Our second model will involve the same series of computations, but will use a different function midway through (here, cosine instead of sine):
class CosModel:
    def __init__(self):
        self.x = np.linspace(-np.pi, np.pi)

    def run(self):
        # Computations which are the same as in SinModel
        self.y = np.cos(self.x)
        # More computations which are the same as in SinModel
Here I have lots of code repetition. Is there a better way to implement these models? I was hoping it would be possible to create a class Model which could inherit the differing function from an arbitrary class.
An important note is that the function which changes between models may take different arguments from self depending on the model.
The words you're looking for are inheritance (allowing a class to inherit and extend / specialize a parent class) and the "template method" design pattern (possibly the most common design pattern - the one everyone discovers by themselves long before reading about design patterns).
Expanding on your MWE:
import numpy as np

class ModelBase(object):
    def __init__(self):
        self.x = np.linspace(-np.pi, np.pi)

    def run(self):
        # Computations which are invariant of the function we use later
        self.y = self.compute_y()
        # More computations which are invariant of which function was used

    def compute_y(self):
        raise NotImplementedError(
            "class {} must implement compute_y()".format(type(self).__name__))

class SinModel(ModelBase):
    def compute_y(self):
        return np.sin(self.x)

class CosModel(ModelBase):
    def compute_y(self):
        return np.cos(self.x)
This being said, creating instance attributes outside the initializer (the __init__ method) is considered bad practice - an object should be fully initialized (have all its attributes defined) when the initializer returns - so it might be better to move the self.y = self.compute_y() line to the initializer if possible, or, if self.y only ever depends on self.x, make it a computed attribute:
class ModelBase(object):
    def __init__(self):
        self.x = np.linspace(-np.pi, np.pi)

    @property
    def y(self):
        return self._compute_y()

    def _compute_y(self):
        raise NotImplementedError(
            "class {} must implement _compute_y()".format(type(self).__name__))

    def run(self):
        # Computations which are invariant of the function we use later
        # No need to explicitly set self.y here, just use `self.y`
        # and it will delegate to self._compute_y()
        # (you can't set it anymore anyway since we made it a read-only property)
        # More computations which are invariant of which function was used
        pass

class SinModel(ModelBase):
    def _compute_y(self):
        return np.sin(self.x)

class CosModel(ModelBase):
    def _compute_y(self):
        return np.cos(self.x)
Also, at this point you don't necessarily need subclasses anymore - at least if that's the only thing that changes - you can just pass the proper function as a callback to your model class, i.e.:
class Model(object):
    def __init__(self, compute_y):
        self.x = np.linspace(-np.pi, np.pi)
        self._compute_y = compute_y

    @property
    def y(self):
        return self._compute_y(self)

    def run(self):
        pass  # code here

cos_model = Model(lambda obj: np.cos(obj.x))
cos_model.run()
sin_model = Model(lambda obj: np.sin(obj.x))
sin_model.run()
Yes, and there's even a name for it: Inheritance is the idea that child classes can "inherit" behaviors and attributes from parent classes, and Polymorphism is the idea that two child classes, sharing similar behavior, can have different implementations of the same method - so that you can call a method on an object without knowing explicitly what type it is, and still have it do the right thing.
Here's how you'd do that in python:
import numpy as np
from typing import override  # Python 3.12+

class TrigModel:
    def __init__(self):
        self.x = np.linspace(-np.pi, np.pi)

    def run(self):
        raise NotImplementedError("Use subclasses SinModel or CosModel")

class SinModel(TrigModel):
    @override
    def run(self):
        self.y = np.sin(self.x)

class CosModel(TrigModel):
    @override
    def run(self):
        self.y = np.cos(self.x)
Unless you explicitly specify otherwise (by declaring a method like run() that overrides the parent class's method of the same name), SinModel and CosModel will use TrigModel's methods (in this case, they both inherit TrigModel's constructor, but behave differently when you call run() on them).
If you then do:
model.run()
then model will behave differently depending on whether it's a SinModel or a CosModel - that is, on what you assigned to it beforehand.
The @override decorator (available as typing.override since Python 3.12) isn't strictly necessary, but it's good practice to lessen ambiguity.
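Putting the classes above together, a minimal runnable demonstration (numpy assumed, decorator omitted so it runs on any Python 3):

```python
import numpy as np

class TrigModel:
    def __init__(self):
        self.x = np.linspace(-np.pi, np.pi)

    def run(self):
        raise NotImplementedError("Use subclasses SinModel or CosModel")

class SinModel(TrigModel):
    def run(self):
        self.y = np.sin(self.x)

class CosModel(TrigModel):
    def run(self):
        self.y = np.cos(self.x)

# The same call site does different things depending on the concrete type:
for model in (SinModel(), CosModel()):
    model.run()
```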
I currently have the following two ways:
class Venue:
    store = Database.store()
    ids = [vid for vid in store.find(Venue.id, Venue.type == "Y")]

    def __init__(self):
        self.a = 1
        self.b = 2
OR
class Venue:
    @classmethod
    def set_venue_ids(cls):
        store = Database.store()
        cls.ids = [vid for vid in store.find(Venue.id, Venue.type == "Y")]

    def __init__(self):
        self.a = 1
        self.b = 2
And before using/instantiating the class I would call:
Venue.set_venue_ids()
What would be the correct way of achieving this?
If it's the first way, what would I do if the instantiation of the attribute required more complex logic that could be done more simply through the use of a function?
Or is there an entirely different way to structure my code to accomplish what I'm trying to do?
From a purely technical POV, a class is an instance of its metaclass, so the metaclass initializer is an obvious candidate for class-attribute initialization (at least when you have anything a bit complex).
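A minimal sketch of what that looks like - here load_ids is a hypothetical stand-in for the real database query:

```python
def load_ids():
    # hypothetical placeholder for the real database query
    return [1, 2, 3]

class VenueMeta(type):
    def __init__(cls, name, bases, namespace):
        super().__init__(name, bases, namespace)
        # runs once, at the moment the class object itself is created
        cls.ids = load_ids()

class Venue(metaclass=VenueMeta):
    def __init__(self):
        self.a = 1
        self.b = 2
```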
Now, given the canonical lifetime of a class object (usually the whole process), I would definitely not use a class attribute here - if anyone adds or removes venues from your database while your process is running, your ids attribute will get out of sync. Why not use a classmethod instead, to make sure your data is always up to date?
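For instance - with a fake store standing in for the question's Database.store(), which isn't shown:

```python
class FakeStore:
    def find(self, *criteria):
        # pretend query result; the real store.find() is assumed
        return iter([101, 102])

class Database:
    @staticmethod
    def store():
        return FakeStore()

class Venue:
    @classmethod
    def get_ids(cls):
        # re-queries on every call, so the result can never go stale
        return list(Database.store().find("Venue.id", "Venue.type == 'Y'"))
```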
Oh, and yes - another way to construct your Venue.ids (or any other class attribute requiring non-trivial code) without having complex code at the class top-level polluting the class namespace (did you notice that in your first example store becomes a class attribute too, as does vid if you're using Python 2.x?) is to put the code in a plain function and call that function from within your class statement's body, i.e.:
def list_venue_ids():
    store = Database.store()
    # I assume `store.find()` returns some iterator (not a `list`);
    # if it does return a list, you could just
    # `return store.find(...)`.
    return list(store.find(Venue.id, Venue.type == "Y"))

class Venue(object):
    ids = list_venue_ids()

    def __init__(self):
        self.a = 1
        self.b = 2
I have a class with many defaults since I can't overload the constructor. Is there a better way than having multiple default arguments, or using kwargs? I thought about passing a dictionary into my class, but then how do I control whether the necessary arguments are passed in? Is there a more Pythonic way to allow many arguments without defining them all as defaults?
For example, I'm allowing many defaults:
class Editor:
    def __init__(self,
                 ffmpeg: str,
                 font_file=None,
                 font_size=148,
                 font_color="white",
                 title_length=5,
                 title_x="(w-text_w)/2",
                 title_y="(h-text_h)/2",
                 box=1,
                 box_color="black",
                 box_opacity=0.5,
                 box_border_width=25):
        self.ffmpeg = ffmpeg
        self.title = define_title(
            font_file, font_size, font_color, title_length)
        self.box = define_box(
            box, box_color, box_opacity, box_border_width)
        self.coordinates = {"x": title_x, "y": title_y}
Where in other languages I might overload the constructor.
You can specify default attributes on the class object itself if you expect them to be pretty standard, which can then be obscured by re-assigning those attributes on the instance when desired. You can also create multiple constructors using #classmethod, if you want different groups of arguments to be passed in.
class Editor(object):
    font_file = None
    font_size = 148

    def __init__(self, ffmpeg: str):
        self.ffmpeg = ffmpeg

    @classmethod
    def with_box(cls, ffmpeg: str, box=1):
        new_editor = cls(ffmpeg)
        new_editor.box = box
        return new_editor
And then you would call:
Editor.with_box(ffmpeg="ffmpeg", box=2)
and get back a properly box-initialized Editor instance.
One way is to use method chaining.
You can start with some class setting all (or most) parameters to defaults:
class Kls(object):
    def __init__(self):
        self._var0 = 0
        self._var1 = 1
Then you add methods for setting each needed one:
    def set_var_0(self, val):
        self._var0 = val
        return self

    def set_var_1(self, val):
        self._var1 = val
        return self
Now, when you create an object, you can just set the necessary ones:
c = Kls()
c = Kls().set_var_1(2)
c = Kls().set_var_1(22).set_var_0(333)
Of course, you are not restricted to writing a method that sets a single parameter each time, you could write methods that set small "families" of parameters. These already might have regular default values, if needed.
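A sketch of such a "family" setter alongside a single-value one (the attribute names are illustrative):

```python
class Kls(object):
    def __init__(self):
        self._var0 = 0
        self._var1 = 1
        self._color = "black"
        self._opacity = 1.0

    def set_var_0(self, val):
        self._var0 = val
        return self

    def set_box_style(self, color="black", opacity=0.5):
        # set a small "family" of related parameters in one chained call
        self._color = color
        self._opacity = opacity
        return self

# chain single-value and family setters freely:
c = Kls().set_var_0(3).set_box_style(color="white")
```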
I have a base class with two subclasses, say Car with subclasses HeavyCar and LightCar. Is it possible to have the creation of a base class return an object of a subclass dependent of a variable? Like this:
weightA = 2000
myAcar = Car(weightA)  # myAcar is now of type HeavyCar
weightB = 500
myBcar = Car(weightB)  # myBcar is now of type LightCar
I know that the normal way to do this would be to look at the weight variable and then decide what type of object I want and then create that specific object. However I would like to leave it up to my class to decide which type it should be and not have to bother about that outside the class, i.e not have to look at the variable weight at all.
You can override __new__ to make it return the desired type. However, it would just be simpler to define a function that does the same, as it would be less prone to errors.
using __new__
class Car(object):
    def __init__(self, weight):
        self.weight = weight

    def __new__(cls, weight):
        if cls is Car:
            if weight > 1000:
                return object.__new__(HeavyCar)
            else:
                return object.__new__(LightCar)
        else:
            return object.__new__(cls)

class LightCar(Car):
    def __init__(self, weight):
        super(LightCar, self).__init__(weight)
        self.speed = "fast"

class HeavyCar(Car):
    pass

assert isinstance(Car(10), LightCar)
assert Car(10).weight == 10  # Car.__init__ invoked
assert hasattr(Car(10), "speed")  # LightCar.__init__ invoked as well
assert isinstance(Car(1001), HeavyCar)
assert Car(1001).weight == 1001  # Car.__init__ invoked
Using a function
def create_car(weight):
    if weight > 1000:
        return HeavyCar(weight)
    else:
        return LightCar(weight)

assert isinstance(create_car(10), LightCar)
Even if it's somehow possible to do this, it's not a very sane object design - things can get too dynamic at some point. The sane solution is simply a factory function:
class LightCar(Car):
    maxWeight = 500

    def __init__(self, weight):
        assert weight <= self.maxWeight
        self._weight = weight

# analogous for HeavyCar

def new_car(weight):
    if weight <= LightCar.maxWeight:
        return LightCar(weight)
    ..
From the point of view of the consumer, it makes little difference:
import cars
car = cars.new_car(450)
print(type(car))
Whether you write cars.new_car(450) or cars.Car(450) hardly makes any difference, except that the former is a dedicated factory function which does exactly what you're wanting: it returns a LightCar or HeavyCar depending on the weight.
It may be possible by overriding the __new__ method, but that would be complex and time-consuming.
... Just use a function, or have a child class that uses multiple inheritance to combine LightCar and HeavyCar. If you make another child class you will have to deal with method resolution conflicts.
def car(weight):
    if weight > 2:
        return HeavyCar(weight)
    return LightCar(weight)
__new__
http://rafekettler.com/magicmethods.html
https://www.python.org/download/releases/2.2/descrintro/#new
mro
http://www.phyast.pitt.edu/~micheles/python/meta2.html
Method Resolution Order (MRO) in new style Python classes
Here is what I have so far:
import random

class Die(object):
    def __init__(self, sides):
        self.sides = sides

    def roll(self):
        return random.randint(1, self.sides)

    def __add__(self, other):
        return Dice(self, other)

    def __unicode__(self):
        return "1d%d" % (self.sides)

    def __str__(self):
        return unicode(self).encode('utf-8')

class Dice(object):
    def __init__(self, num_dice, sides):
        self.die_list = [Die(sides)] * num_dice

    def __init__(self, *dice):
        self.die_list = dice

    def roll(self):
        return reduce(lambda x, y: x.roll() + y.roll(), self.die_list)
But when I try Dice(3, 6) and subsequently call roll, it fails because 'int' object has no attribute 'roll'. That means it's going into the varargs constructor. What can I do here to make this work, or is there another alternative?
As you observed in your question, the varargs constructor is being invoked. This is because the second definition of Dice.__init__ is overriding, not overloading, the first.
Python doesn't support method overloading, so you have at least two choices at hand.
Define only the varargs constructor. Inspect the length of the argument list and the types of the first few elements to determine what logic to run. Effectively, you would merge the two constructors into one.
Convert one of the constructors into a static factory method. For example, you could delete the first constructor, keep the varargs one, and then define a new factory method.
I prefer the second method, which allows you to cleanly separate your logic. You can also choose a more descriptive name for your factory method; from_n_sided_dice is more informative than just Dice:
@staticmethod
def from_n_sided_dice(num_dice, sides):
    return Dice(*([Die(sides)] * num_dice))
Side note: Is this really what you want? [Die(sides)] * num_dice returns a list with multiple references to the same Die object. Rather, you might want [Die(sides) for _ in range(num_dice)].
EDIT: You can emulate method overloading (via dynamic dispatch, not static dispatch as you may be used to, but static types do not exist in Python) with function decorators. You may have to engineer your own solution to support *args and **kwargs, and having separate methods with more precise names is still often a better solution.
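One concrete way to get that dispatch-by-type feel from the standard library (Python 3.8+) is functools.singledispatchmethod, dispatching __init__ on the type of its first argument - a sketch, not necessarily the best design for this case:

```python
import random
from functools import singledispatchmethod

class Die:
    def __init__(self, sides):
        self.sides = sides

    def roll(self):
        return random.randint(1, self.sides)

class Dice:
    @singledispatchmethod
    def __init__(self, first, *rest):
        # default case: called with Die instances, e.g. Dice(d1, d2)
        self.die_list = [first, *rest]

    @__init__.register(int)
    def _(self, num_dice, sides):
        # int case: called as Dice(num_dice, sides), e.g. Dice(3, 6)
        self.die_list = [Die(sides) for _ in range(num_dice)]

    def roll(self):
        return sum(d.roll() for d in self.die_list)
```

Usage: `Dice(3, 6)` builds three six-sided dice, while `Dice(Die(6), Die(4))` wraps existing Die objects - the two "overloads" are kept as separate, readable methods.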
What you want is a single __init__ method, defined along these lines:

class Dice(object):
    def __init__(self, *args):
        if not isinstance(args[0], Die):
            # called as Dice(num_dice, sides)
            self.die_list = [Die(args[1]) for _ in range(args[0])]
        else:
            # called as Dice(die1, die2, ...)
            self.die_list = list(args)

    def roll(self):
        return sum(x.roll() for x in self.die_list)