So I am working on an implementation of a Dynamic Decision Network (DDN) class.
This DDN is a type of Bayesian Network (BN) with some additional functionality.
However, I also defined a Quantum Bayesian Network (QBN) that uses quantum computing to perform inference. Its implementation is very similar to the BN class: I only change one of the BN methods (the query method) to perform quantum computations instead, and add some other methods to help perform these computations.
I want the DDN class to be able to inherit from the BN class if one wants to perform the computations classically, and inherit from the QBN class if one wants to perform the computations in a quantum manner.
The code looks something like this:
class BN:
    ...
    def query(self, ...):
        ...

class QBN(BN):
    ...
    def query(self, ...):
        # Modified query method
        ...

class DDN(???):
    ...
Am I structuring the hierarchy wrong in some way? How can I reconcile this?
You can put the class into a factory function:
def make_DDN(Base):
    class DDN(Base):
        def other_methods(self):
            ...
    return DDN
Now you can make new classes:
DDN1 = make_DDN(BN)
etc
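For illustration, here is a minimal, self-contained sketch of the factory approach; the method names and bodies (query, step, the string results) are placeholders, not from the original code:

class BN:
    def query(self, evidence):
        # classical inference (placeholder)
        return f"classical result for {evidence}"

class QBN(BN):
    def query(self, evidence):
        # quantum inference (placeholder)
        return f"quantum result for {evidence}"

def make_DDN(Base):
    # Build a DDN class on top of whichever base you choose (BN or QBN)
    class DDN(Base):
        def step(self, evidence):
            # DDN-specific behaviour that reuses the base class's query
            return self.query(evidence)
    return DDN

ClassicalDDN = make_DDN(BN)
QuantumDDN = make_DDN(QBN)
print(ClassicalDDN().step("e"))  # classical result for e
print(QuantumDDN().step("e"))    # quantum result for e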
The problem domain of your program is unclear to me, and it is unclear what the objects and their methods should be able to do, so this is a suggestion rather than an answer, but it's too big for a comment.
class BN:
    def method(self):
        return self.data * 2

class QBN(BN):
    def method(self, classic=False):
        if classic:
            return super().method()
        return self.data * 3

class DDN(QBN):
    def __init__(self, data):
        self.data = data
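For example, with this hierarchy the same DDN instance can switch between the two behaviours (the numeric bodies above are only placeholders):

ddn = DDN(5)
print(ddn.method())              # 15 -> QBN-style computation (data * 3)
print(ddn.method(classic=True))  # 10 -> falls back to the classical BN.method (data * 2)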
Below are two variants for initializing a class instance variable. What is the best practice for initializing an instance variable in a class in Python, and why (maybe neither of the suggested variants)?
My assumption: variant a, because it might be more explicit?
class Example():
    def __init__(self, parameter):
        # EITHER
        # variant a to initialize var_1
        self.var_1 = self.initialize_var_1_variant_a(parameter)
        # OR
        # variant b to initialize var_1
        self.initialize_var_1_variant_b(parameter)
        # OR something else
        # ...

    def initialize_var_1_variant_a(self, parameter):
        # complex calculations, var_1 = f(parameter)
        result_of_complex_calculations = 123
        return result_of_complex_calculations

    def initialize_var_1_variant_b(self, parameter):
        # complex calculations, var_1 = f(parameter)
        result_of_complex_calculations = 123
        self.var_1 = result_of_complex_calculations

example_instance = Example("some_parameter")
print(example_instance.var_1)
Variant A is the common way to do this. It is very nice to be able to see all of the class members by looking at __init__, instead of having to dive into the other functions (initialize_var_1_variant_b) to find out exactly what attributes are set.
In general, all member attributes that a class will ever have should be initialized in __init__.
To come at it from another angle, initialize_var_1_variant_a should do as little as possible. Calculating the value of var_1 and saving it as an instance attribute are two tasks that can easily be broken apart.
It also opens up the possibility of moving initialize_var_1_variant_a outside of the class itself, so it could be re-used by other parts of your program down the line.
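For example, a hypothetical sketch of that refactoring (the module-level function name compute_var_1 is made up for illustration):

def compute_var_1(parameter):
    # complex calculations, var_1 = f(parameter)
    return 123

class Example:
    def __init__(self, parameter):
        # __init__ stays small: it only assigns the computed value
        self.var_1 = compute_var_1(parameter)

# the calculation can now be reused outside the class as well
print(Example("some_parameter").var_1)  # 123
print(compute_var_1("some_parameter"))  # 123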
I invoke a Python class in my Python file. The Python class I invoke loads big data from disk.
# invoke python class
import mypythonclass

def method():
    for i in range(100):
        mypythonclass.dosomething(params)

# code in mypythonclass
def dosomething(params):
    # load data here
    # do something
My question is: how can I avoid loading the data repeatedly in mypythonclass? Thanks.
If the data you are loading will be used by the class instance, you may initialize the data when instantiating the class.
For example:
class MyPythonClass(object):
    def __init__(self, data):
        self.data = data

    def dosomething(self, params):
        # use self.data
If possible, you should move the class instantiation out of the for loop, so the data is loaded only once. The example you provided doesn't have sufficient details, though, so it is hard to know exactly what you want.
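A self-contained sketch of that idea (load_data and the placeholder bodies are made up for illustration):

class MyPythonClass(object):
    def __init__(self, data):
        self.data = data                 # big data stored once on the instance

    def dosomething(self, params):
        return len(self.data) + params   # placeholder use of self.data

def load_data():
    # stand-in for the expensive load from disk
    return [0] * 1000

instance = MyPythonClass(load_data())    # the load happens a single time here

def method():
    for i in range(100):
        instance.dosomething(i)          # the loop reuses the already-loaded data

method()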
If the data does not change across different instances of the class, you can use class attributes and make dosomething a class method.
class MyPythonClass(object):
    data = some_data

    @classmethod
    def dosomething(cls, params):
        # use cls.data
In this case, the data will be available to all instances of the class.
Say I have a class BigObject, which contains within it a number of SmallObjects - and this is the only place where SmallObject is used.
class SmallObject:
    pass

class BigObject:
    def __init__(self):
        self.objects = [SmallObject() for _ in range(10)]
This is all fine, until I want a second version of these two, which inherits some behaviour and overrides some; inheritance seems natural, except:
class NewSmallObject(SmallObject):
    pass

class NewBigObject(BigObject):
    def __init__(self):
        super().__init__()
        self.objects = [NewSmallObject() for _ in range(10)]
We had to create a bunch of SmallObjects only to immediately override them with NewSmallObjects, which is not great if, for example, SmallObjects are expensive to create. Also, if we change how the list of SmallObjects is created in BigObject, those changes don't get passed on to NewBigObject.
The solution I came up with was to use nested classes:
class BigObject:
    class SmallObject:
        pass

    def __init__(self):
        self.objects = [self.SmallObject() for _ in range(10)]

class NewBigObject(BigObject):
    class SmallObject(BigObject.SmallObject):
        pass
This deals with both the issues described above. My main concern is that when I looked on StackOverflow for questions about nested classes in Python people keep saying nested classes are unpythonic, and I'd like to understand why. It can also create quite deeply nested classes if SmallObject contains TinyObjects which contain MinisculeObjects etc, which may be the answer?
So my question is basically:
is this a "good" solution to this problem?
if not, what would a good alternative be?
The solution is, as you've already found, to make SmallObject an attribute of the BigObject class.
There is nothing inherently wrong with using a nested class for this, but the readability of your code may suffer if the nested class is very long. Generally speaking, though, I would recommend defining SmallObject in the global scope. After all, the Zen of Python says "Flat is better than nested". If you keep nesting TinyObjects and MinisculeObjects, your code will quickly become unreadable:
class BigObject:
    class SmallObject:
        class TinyObject:
            class MinisculeObject:
                ...  # MinisculeObject class body
            ...  # TinyObject class body
        ...  # SmallObject class body
    ...  # BigObject class body
Defining your classes in the global scope only requires minimal extra effort, and looks much cleaner:
class MinisculeObject:
    ...  # MinisculeObject class body

class TinyObject:
    miniscule_object_factory = MinisculeObject
    ...  # TinyObject class body

class SmallObject:
    tiny_object_factory = TinyObject
    ...  # SmallObject class body

class BigObject:
    small_object_factory = SmallObject
    ...  # BigObject class body
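To make the override mechanism concrete, here is a minimal sketch of how BigObject might consume its factory attribute and how a subclass swaps in a different SmallObject (the class names follow the question; everything else is illustrative):

class SmallObject:
    pass

class NewSmallObject(SmallObject):
    pass

class BigObject:
    small_object_factory = SmallObject   # which class to instantiate

    def __init__(self):
        # only the factory attribute is referenced here, so subclasses can
        # change the created type without repeating this constructor code
        self.objects = [self.small_object_factory() for _ in range(10)]

class NewBigObject(BigObject):
    small_object_factory = NewSmallObject   # the only line that needs to change

print(type(NewBigObject().objects[0]).__name__)  # NewSmallObject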
This looks like a good use case for the Abstract Factory pattern.
The gist is that there is a class hierarchy for creating things that derive from SmallObject. That way subclasses of BigObject will all use the same interface to get their SmallObject instances.
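As a rough, hypothetical sketch of what that could look like here (the factory class names are made up for illustration):

from abc import ABC, abstractmethod

class SmallObject:
    pass

class NewSmallObject(SmallObject):
    pass

class SmallObjectFactory(ABC):
    @abstractmethod
    def make_small_object(self):
        ...

class DefaultSmallObjectFactory(SmallObjectFactory):
    def make_small_object(self):
        return SmallObject()

class NewSmallObjectFactory(SmallObjectFactory):
    def make_small_object(self):
        return NewSmallObject()

class BigObject:
    def __init__(self, factory=DefaultSmallObjectFactory()):
        # BigObject only talks to the factory interface
        self.objects = [factory.make_small_object() for _ in range(10)]

big = BigObject(NewSmallObjectFactory())   # same BigObject code, different SmallObjects
print(type(big.objects[0]).__name__)       # NewSmallObject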
I want to create a service class that has just one instance, so should I make that class a singleton, or should I make the methods classmethods?
class PromoService():
    @classmethod
    def create_promo(cls, promotion):
        # do stuff
        return promo

class DiscountPromoService(PromoService):
    @classmethod
    def create_promo(cls, promo, discount):
        promo = super(DiscountPromoService, cls).create_promo(promo)
        promo.discount = discount
        promo.save()
        return promo
The reason I don't want to create it as a module is that I would need to subclass my service. What is the most Pythonic way to do this: the above-mentioned way, or making it a singleton class?
Short answer: In my opinion it would work.
BUT, in a pure pattern sense, I have been wrestling with this question for a while:
Do Python class methods and class attributes essentially behave like a singleton?
All instances of that class have no bearing on them
Only the class itself has access to them
There is always only one of them
Yes, a pure Singleton pattern comparison would fail, plain and simple, but surely it's not far off?
I wouldn't call myself a Python expert, so I'm happy to hear other views and be corrected on my assumptions.
If you want a singleton, go with a singleton. The pattern referenced here works well. You would simply need to do something like:
class PromoService():
    __metaclass__ = Singleton
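Since the linked pattern isn't reproduced above, here is a minimal metaclass-based singleton sketch; note that in Python 3 the metaclass is passed as a keyword argument rather than via __metaclass__ (this sketch is an illustrative assumption, not the code the answer links to):

class Singleton(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        # create the instance only once, then keep returning it
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]

class PromoService(metaclass=Singleton):
    pass

assert PromoService() is PromoService()   # always the same instance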
I need to add an object, e.g. a Person, to a list that is given as a parameter. This is an example of the code:
class Person:
    def __init__(self, age):
        self.age = age

    def add_to_list(self, list_of_persons):
        list_of_persons.append(self)
I know this situation may be caused by bad code design, however, it is working on a small test that I implemented. My question is, will this design cause issues in future? If yes, which ones?
I don't like your solution for various reasons, which include:
Elements should not be responsible for their membership in a container. It should be the container's responsibility to add and remove them.
You generally don't expect methods to only modify a parameter. Methods are generally used to modify the instance, possibly with some side effects on the parameters, but I wouldn't expect a method to only change the parameter.
The method doesn't add any real functionality except that it hides how Person is inserted into a list. If you follow this design why didn't you, for example, add a method to remove a Person from the list?
I find:
person.add_to_list(people)
quite unreadable.
If you think you aren't going to change the container for the Person, i.e. you will always use a list instead, why don't you simply use a list directly? It's simpler, has less overhead and makes you write less code:
people = []
people.append(Person(18))
If you think that you are probably going to change the container used, then I believe it's better to write a People class with an add and a remove method (and whatever else you need). Behind the scenes People can use whatever container it wants to implement these methods. In this way you get a readable, intuitive and robust design:
class People(object):
    def __init__(self, people=()):
        self._members = list(people)

    # add other methods/special methods like __len__ etc.
    def add(self, person):
        self._members.append(person)

    def remove(self, person):
        self._members.remove(person)

people = People()
people.add(john)
people.add(jack)
# Change list to set?
class People(object):
    def __init__(self, people=()):
        self._members = set(people)

    def add(self, person):
        self._members.add(person)

    def remove(self, person):
        self._members.remove(person)
I can't really think of a situation where you couldn't replace a call to your person.add_to_list(list) with a list.append(person) call.
The problem here is that you are adding a dependency between your Person class and the list object, which hurts encapsulation. If later you want to replace the list with another data structure that doesn't have an append method (such as a custom container), you'll have to add another method, Person.add_to_container, to retain consistency, or add an append method to your other data structure (if you can, and if this makes sense).