Python add object to list inside same object method [closed]

I need to add an object, e.g. a Person, to a list that is given as a parameter. This is an example of the code:
class Person:
    def __init__(self, age):
        self.age = age

    def add_to_list(self, list_of_persons):
        list_of_persons.append(self)
I know this situation may be caused by bad code design, however, it is working on a small test that I implemented. My question is, will this design cause issues in future? If yes, which ones?

I don't like your solution for various reasons, which include:
Elements should not be responsible for their membership in a container. It should be the container's responsibility to add and remove them.
You generally don't expect a method to only modify one of its parameters. Methods are generally used to modify the instance, possibly with some side effects on the parameters, but I wouldn't expect a method whose only purpose is to change a parameter.
The method doesn't add any real functionality except that it hides how a Person is inserted into a list. If you follow this design, why didn't you, for example, also add a method to remove a Person from the list?
I find:
person.add_to_list(people)
quite unreadable.
If you think you aren't going to change the container for the Person, i.e. you will always use a list instead, why don't you simply use a list directly? It's simpler, has less overhead and makes you write less code:
people = []
people.append(Person(18))
If you think that you are probably going to change the container used, then I believe it's better to write a People class with an add and a remove method (and whatever else you need). Behind the scenes People can use whatever container it wants to implement these methods. In this way you get a readable, intuitive and robust design:
class People(object):
    def __init__(self, people=()):
        self._members = list(people)

    # add other methods/special methods like __len__ etc.

    def add(self, person):
        self._members.append(person)

    def remove(self, person):
        self._members.remove(person)


people = People()
people.add(john)
people.add(jack)

# Change list to set?
class People(object):
    def __init__(self, people=()):
        self._members = set(people)

    def add(self, person):
        self._members.add(person)

    def remove(self, person):
        self._members.remove(person)

I can't really think of a situation where you couldn't replace a call to your person.add_to_list(list) with a list.append(person) call.
The problem here is that you are adding a dependency between your Person class and the list object, which hurts encapsulation. If later you want to replace the list with another data structure that doesn't have an append method (such as a custom container), you'll have to add another method, Person.add_to_container, to retain consistency, or add an append method to your other data structure (if you can and if this makes sense).
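As a minimal sketch of that coupling (the set-based container here is only for illustration), the method works only with containers that happen to have an append method, while calling the container's own method keeps Person out of the picture:

class Person:
    def __init__(self, age):
        self.age = age

    def add_to_list(self, list_of_persons):
        list_of_persons.append(self)


people_list = []
people_set = set()

john = Person(18)
john.add_to_list(people_list)       # works: list has an append() method
# john.add_to_list(people_set)      # AttributeError: 'set' object has no attribute 'append'

# Letting the container do the work needs no change to Person:
people_list.append(john)
people_set.add(john)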

Related

When is it acceptable to shadow a builtin? [closed]

We are often told not to shadow builtins like id, int, list and so on. Is it ever acceptable to break this rule?
In general, you should not shadow builtins for several reasons:
doing so prevents you from using the builtin later by calling its common name
other people reading your code will be confused, as they expect list to be the builtin list, and so on
IDEs will (usually) compound this by highlighting your shadowed variables as though they were builtins.
However, there are some exceptions.
In class attributes
Using a builtin name in a class attribute doesn't shadow anything at all at runtime, although it might at class-body evaluation time. This pattern is quite common:
class Foo:
    def __init__(self, foo_id: int) -> None:
        self.id = foo_id
Foo().id was unbound until we bound it in the constructor. No shadowing is going on here. The use of id to mean "identity" is sufficiently common that it is probably better than any alternative (like obj_id or identity), and nobody is going to expect Foo().id to be the builtin id.
If you use dataclasses, this might involve shadowing:
from dataclasses import dataclass

@dataclass
class Foo:
    # id refers to the builtin `id`
    id: int = 0  # assigning a default for the example
    # id now refers to `Foo.id`, which has the type hint int

foo = Foo(id="id of foo")
# foo.id refers to the *instance* id, which is actually a string.
In this example we do shadow id, briefly. We couldn't use it at evaluation time. But you probably shouldn't be getting the id of an object when defining (not constructing) a class anyhow. Because of the way dataclasses work, we end up with a separate id on the instance foo (which, as is demonstrated here, doesn't even have to be the right type!). There are three namespaces at play: the enclosing namespace, in which id means what it normally does; the namespace of the uninstantiated class Foo, in which id is 0; and the namespace of the object foo, in which id is a string. We have shadowed in the second of these, but we don't do anything in that namespace later anyhow.
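A small sketch of those three namespaces in action (nothing here beyond what is described above):

from dataclasses import dataclass

@dataclass
class Foo:
    id: int = 0

foo = Foo(id="id of foo")

print(Foo.id)   # 0 -- the class namespace of the uninstantiated Foo
print(foo.id)   # 'id of foo' -- the instance namespace
print(id(foo))  # the builtin id still works in the enclosing namespace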
In small functions, when clarity demands
It might be acceptable to shadow a builtin in a small function which can never conceivably use the shadowed object, if doing so increases clarity. All of these are probably fine:
def get_object_from_db(id: int) -> CustomObj:
    ...

def find_enclosing_dir(filename: str) -> pathlib.Path:
    f = _find_file(filename)
    dir = f.parent
    some_logic_to_validate(dir)
    return dir

class Foo:
    def __init__(self, id: int) -> None:
        self.id = id
The first example shadows id in a function in which the only 'id' we care about is the database id for the data we're retrieving to build our CustomObj. Nevertheless, using obj_id here might well be clearer. The second example shadows dir and uses it for a directory. dir and similar introspection tools are very unlikely to be used in this code; again parent_dir might be clearer, but shadowing here is a stylistic choice. In the third example we shadow id in the constructor for Foo. Again, we cannot now use id, but our code is arguably cleaner.
After thinking twice
In general, avoiding shadowing is a good thing. Inside a small function or method it might make sense to shadow a builtin for clarity. This should only be done when it increases clarity, and the work of refactoring to avoid the shadowing if something changes down the line is tiny. At the very least the function should fit comfortably on one screen. Class attributes can 'shadow' with impunity (as they don't shadow at all), but only a few names open themselves up to being used like this. id and maybe dir make sense, but Foo.list or Foo.str is probably less clear than a descriptive name.

In what use case are classes more useful than a list? [closed]

I've been trying to learn classes and I struggle to see when they can become useful. In this example with employees I find using a well-organized multi-dimensional list to be easier.
I've provided examples of what I mean with two sets of code that do the exact same thing, one using lists and one using classes.
I have tried watching many YouTube tutorials on classes, but I just can't seem to understand why a multi-dimensional list wouldn't do the job better.
class Employees():
    def __init__(self, First_Name, Last_Name, pay="26000"):
        self.First_Name = First_Name
        self.Last_Name = Last_Name
        self.pay = pay

    def greeting(self):
        return "Hi, my name is "+self.First_Name+" "+self.Last_Name+" and I earn: "+self.pay

employee_1 = Employees("First_Name_1", "Last_Name_1", "30000")
employee_2 = Employees("First_Name_2", "Last_Name_2")
print(employee_1.greeting())
print(employee_2.greeting())

employee_list = []

def employees(employee_list, First_Name, Last_Name, pay="26000"):
    employee_list += [[First_Name, Last_Name, pay, lambda: greeting(First_Name, Last_Name, pay)]]

def greeting(a, b, c):
    return "Hi, my name is "+a+" "+b+" and I earn: "+c

employees(employee_list, "First_Name_1", "Last_Name_1", "30000")
employees(employee_list, "First_Name_2", "Last_Name_2")
print(employee_list[0][3]())
print(employee_list[1][3]())
Classes are much, much more organized and useful than lists. For instance, classes have constructors, methods, attributes, magic methods, superclasses, etc. Classes are most useful when you have a lot of related functions and when you want to create more than one object. They can immensely help with organizing spaghetti code, especially when used with modules. Additionally, many libraries use classes rather than lists.
This is a tenet of Object-Oriented Programming; you're encapsulating your data into classes as opposed to having them declared in a more procedural and ad-hoc fashion.
The immediate advantage is that the first code block is way more readable and its intentions are a lot clearer. You have Employees and you know that you want to print the greetings of them. With the second code block, you have a list which requires you to keep track of where each value is and it's not immediately apparent what employee_list[0][3] represents. It also becomes tough to refactor or fix if the order of your list changes for any reason.
Defining a class provides a convenient syntax for working with data, but it doesn't provide much you couldn't simulate. Consider a slightly different reworking of your list example:
# Employees.__init__
# Instead of named attributes, there are implied positions in a list
#  0 - first name
#  1 - last name
#  2 - salary
def make_employee(first, last, pay="26000"):
    return [first, last, pay]

# Employees.greeting
def greeting(employee):
    first, last, pay = employee
    return "Hi, my name is {} {} and I earn {}".format(first, last, pay)

# Call make_employee instead of Employees
employee1 = make_employee("First_Name_1", "Last_Name_1", "30000")
employee2 = make_employee("First_Name_2", "Last_Name_2")

# Pass employee "objects" to greeting
print(greeting(employee1))
print(greeting(employee2))
Support for inheritance aside, the only real difference here is that you pass an employee "object" explicitly to a function, rather than invoking a method on an object.
Inheritance and namespacing (multiple classes can have a method named greeting without them interfering with each other) are two big advantages to using classes, though.
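As a small, hypothetical sketch of the namespacing point (the Manager class is made up for the example), two classes can each define a greeting method without clashing:

class Employee:
    def __init__(self, name):
        self.name = name

    def greeting(self):
        return "Hi, my name is " + self.name

class Manager:
    def __init__(self, name):
        self.name = name

    def greeting(self):
        return "Hello, I am " + self.name + " and I manage this team"

# Each greeting lives in its own class namespace, so neither interferes with the other.
print(Employee("First_Name_1").greeting())
print(Manager("First_Name_2").greeting())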
A class provides better readability and a better abstraction, i.e. you could inherit properties (variables/functions) for a 'dog' from class 'pet'.
Normally a class is better style as it can clean up 'spaghetti' code.
Classes are types and lists are already a built-in type with a constructor and methods (e.g. .sort). So the questions should be whether a type (or class) is sufficient for what you are doing or whether you want to have extra (different) behavior.
You can even extend the list type with your own version, change its behavior, and add methods just like this, while keeping all the methods that are already there:
class MyList(list):
    def my_method(self):
        pass
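A quick usage sketch (the values are arbitrary): an instance of the subclass keeps every list method while gaining your own:

my = MyList([3, 1, 2])
my.sort()        # inherited list behaviour still works
my.append(4)
my.my_method()   # plus whatever extra behaviour you add
print(my)        # [1, 2, 3, 4]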
In a more general picture, whether you should use classes or not (functional style) is a matter of taste and philosophy. The big advantage of classes (maintaining state) can also be their biggest pitfall (side effects if not carefully crafted).

Nested classes for cleaner inheritance? [closed]

Say I have a class BigObject, which contains within it a number of SmallObjects - and this is the only place where SmallObject is used.
class SmallObject:
    pass

class BigObject:
    def __init__(self):
        self.objects = [SmallObject() for _ in range(10)]
This is all fine, until I want a second version of these two, which inherits some behaviour and overrides some; inheritance seems natural, except:
class NewSmallObject(SmallObject):
    pass

class NewBigObject(BigObject):
    def __init__(self):
        super().__init__()
        self.objects = [NewSmallObject() for _ in range(10)]
We had to create a bunch of SmallObjects only to immediately throw them away and replace them with NewSmallObjects, which is not great if, for example, SmallObjects are expensive to create. Also, if we change how the list of SmallObjects is created in BigObject, those changes don't get passed on to NewBigObject.
The solution I came up with was to use nested classes:
class BigObject:
    class SmallObject:
        pass

    def __init__(self):
        self.objects = [self.SmallObject() for _ in range(10)]

class NewBigObject(BigObject):
    class SmallObject(BigObject.SmallObject):
        pass
This deals with both the issues described above. My main concern is that when I looked on StackOverflow for questions about nested classes in Python people keep saying nested classes are unpythonic, and I'd like to understand why. It can also create quite deeply nested classes if SmallObject contains TinyObjects which contain MinisculeObjects etc, which may be the answer?
So my question is basically:
is this a "good" solution to this problem?
if not, what would a good alternative be?
The solution is, as you've already found, to make SmallObject an attribute of the BigObject class.
There is nothing inherently wrong with using a nested class for this, but the readability of your code may suffer if the nested class is very long. Generally speaking, though, I would recommend defining SmallObject in the global scope. After all, the Zen of Python says "Flat is better than nested". If you keep nesting TinyObjects and MinisculeObjects, your code will quickly become unreadable:
class BigObject:
    class SmallObject:
        class TinyObject:
            class MinisculeObject:
                ...  # MinisculeObject class body
            ...  # TinyObject class body
        ...  # SmallObject class body
    ...  # BigObject class body
Defining your classes in the global scope only requires minimal extra effort, and looks much cleaner:
class MinisculeObject:
    ...  # MinisculeObject class body

class TinyObject:
    miniscule_object_factory = MinisculeObject
    ...  # TinyObject class body

class SmallObject:
    tiny_object_factory = TinyObject
    ...  # SmallObject class body

class BigObject:
    small_object_factory = SmallObject
    ...  # BigObject class body
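Tying this back to the original BigObject/SmallObject example, here is a sketch of how the factory attribute might be consumed and overridden (the attribute name follows the snippet above; the rest is an assumption, not the only way to wire it up):

class SmallObject:
    pass

class NewSmallObject(SmallObject):
    pass

class BigObject:
    small_object_factory = SmallObject

    def __init__(self):
        # The factory attribute decides which class gets instantiated.
        self.objects = [self.small_object_factory() for _ in range(10)]

class NewBigObject(BigObject):
    # The subclass only swaps the factory; no SmallObjects are created and thrown away.
    small_object_factory = NewSmallObject

big = NewBigObject()
print(type(big.objects[0]).__name__)  # NewSmallObject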
This looks like a good use case for the Abstract Factory pattern.
The gist is that there is a class hierarchy for creating things that derive from SmallObject. That way subclasses of BigObject will all use the same interface to get their SmallObject instances.

python singleton vs classmethod [closed]

I want to create a service class that just has one instance, so should I make that class a singleton, or should I make the methods classmethods?
class PromoService():
    @classmethod
    def create_promo(cls, promotion):
        # do stuff
        return promo

class DiscountPromoService(PromoService):
    @classmethod
    def create_promo(cls, promo, discount):
        promo = super(DiscountPromoService, cls).create_promo(promo)
        promo.discount = discount
        promo.save()
        return promo
The reason I don't want to create it as a module is that I would need to subclass my service. What is the most Pythonic way to do this, the above-mentioned way or making it a singleton class?
Short answer: In my opinion it would work.
BUT, in a pure pattern sense, I have been wrestling with this question for a while:
Do Python class methods and class attributes essentially behave like a Singleton?
All instances of that class have no bearing on them
Only the class itself has access to them
There is always only one of them
Yes, a pure Singleton Pattern comparison would fail plain and simple, but surely it's not far off?
I wouldn't call myself a Python expert, so I'm happy to hear views on this and to be corrected on my assumptions.
If you want a singleton, go with a singleton. The pattern referenced here works well. You would simply need to do something like:
class PromoService():
    __metaclass__ = Singleton
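For reference, a commonly used Singleton metaclass looks roughly like the sketch below; this is an assumption about the pattern the answer refers to, not necessarily the exact one. Note that __metaclass__ is Python 2 syntax; in Python 3 you attach the metaclass in the class statement:

class Singleton(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        # Create the instance on first use, then keep returning the same one.
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]

class PromoService(metaclass=Singleton):
    pass

assert PromoService() is PromoService()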

Properties defined with property() and @property [closed]

I am now trying to properly learn Python, and I am really puzzled by the existence of two ways to create object properties: using the @property decorator and the property() function. So both of the following are valid:
class MyClassAt:
    def __init__(self, value):
        self._time = value

    @property
    def time(self):
        return self._time

    @time.setter
    def time(self, value):
        if value > 0:
            self._time = value
        else:
            raise ValueError('Time should be positive')
and
class MyClassNoAt:
    def __init__(self, value):
        self._time = value

    def get_time(self):
        return self._time

    def set_time(self, value):
        if value > 0:
            self._time = value
        else:
            raise ValueError('Time should be positive')

    time = property(fget=get_time, fset=set_time)
Is there an agreement which one to use? What would a Pythonista choose?
They are equivalent, but the first one is preferred as many people find it more readable (while also not cluttering the code and the namespace).
The problem with the second method is that you are defining two methods that you will never use, and they remain in the class.
One would use the second method only if they had to support a very old Python version that does not support the decorator syntactic sugar. Function and method decorators were added in Python 2.4 (class decorators only in version 2.6), so that is in almost all cases a problem of the past.
In the old days (pre-Python 2.4), the decorator syntax (i.e. @property) didn't exist yet, so the only way to create decorated functions was to use your second method:
time = property(fget=get_time, fset=set_time)
The PEP that led to decorators (PEP 318) gives many reasons for the motivation behind the newer syntax, but perhaps the most important is this one:
The current method of applying a transformation to a function or
method places the actual transformation after the function body. For
large functions this separates a key component of the function's
behavior from the definition of the rest of the function's external
interface.
It's much clearer with the newer @ syntax that a property exists by simply skimming through the code and looking at the method/property definitions.
Unfortunately, when they first added the new @property decorator, it only worked for decorating a getter. If you had a setter or a deleter function, you still had to use the old syntax.
Luckily, in Python 2.6, they added the getter, setter, and deleter attributes to properties so you could use the new syntax for all of them.
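For completeness, a small sketch showing all three attributes used with the @ syntax (the Timer class and its attribute are made up for illustration):

class Timer:
    def __init__(self, value):
        self._time = value

    @property
    def time(self):
        return self._time

    @time.setter
    def time(self, value):
        self._time = value

    @time.deleter
    def time(self):
        del self._time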
These days, there's really no reason to ever use the old syntax for decorating functions and classes.
