I was just working on a large class hierarchy and thought that probably all methods in a class should be classmethods by default.
I mean that it is very rare that a method really needs the object it is called on, and whatever variables one needs can be passed in explicitly. Also, this way there would be fewer methods through which people could change the object itself (more typing to do it the other way), and people would be more inclined to be "functional" by default.
But, I am a newb and would like to find out the flaws in my idea (if there are any :).
Having classmethods as a default is a well-known but outdated paradigm. It's called Modular Programming. Your classes become effectively modules this way.
The Object-Oriented Paradigm (OOP) is mostly considered superior to the Modular Paradigm (and it is the younger of the two). The main difference is exactly that pieces of code are associated by default with a group of data (called an object), so they are instance methods rather than classmethods.
It turns out in practice that this is much more useful. Combined with other OOP architectural ideas like inheritance, it offers more direct ways to represent the models in the heads of the developers.
Using object methods I can write abstract code which can be used for objects of various types; I don't have to know the type of the objects while writing my routine. E.g. I can write a max() routine which compares the elements of a list with each other to find the greatest. The comparison is then done using the > operator, which is effectively an object method of the element (in Python this is __gt__(), in C++ it would be operator>(), etc.). Now the object itself (maybe a number, maybe a date, etc.) can handle the comparison of itself with another of its type. In code this can be written as short as
a > b # in Python this calls a.__gt__(b)
while with only having classmethods you would have to write it as
type(a).__gt__(a, b)
which is much less readable.
If the method doesn't access any of an object's state, but is specific to that object's class, then it's a good candidate for being a classmethod.
Otherwise if it's more general, then just use a function defined at module level, no need to make it belong to a specific class.
I've found that classmethods are actually pretty rare in practice, and certainly not the default. There should be plenty of good code out there (on e.g. github) to get examples from.
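For illustration, here is a minimal sketch of that distinction (the class and function names are invented for the example): an instance method that needs per-object state, a classmethod that only needs the class, and a plain module-level function for the general-purpose helper:

def celsius_to_fahrenheit(c):
    # general-purpose helper: no class needed, a module-level function is enough
    return c * 9 / 5 + 32

class Thermometer:
    default_unit = "C"          # class-level data

    def __init__(self, reading):
        self.reading = reading  # per-instance state

    def in_fahrenheit(self):
        # needs self.reading, so it has to be an instance method
        return celsius_to_fahrenheit(self.reading)

    @classmethod
    def from_fahrenheit(cls, f):
        # only needs the class (to construct an instance), so a classmethod fits
        return cls((f - 32) * 5 / 9)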
I've been programming in Python for a long time, but I still can't understand why classes base their attribute lookup on the __dict__ dictionary by default instead of the faster __slots__ tuple.
Wouldn't it make more sense to use the more efficient and less flexible __slots__ method as the default implementation and instead make the more flexible, but slower __dict__ method optional?
Also, if a class uses __slots__ to store its attributes, there's no chance of mistakenly creating new attributes like this:
class Object:
    __slots__ = ("name",)

    def __init__(self, name):
        self.name = name

obj = Object("Karen")

# Note the typo here: with __slots__ this raises AttributeError
# instead of silently creating a new attribute
obj.namr = "Karen"
So, I was wondering if there's a valid reason why Python defaults to accessing instance attributes through __dict__ instead of through __slots__.
Python is designed to be an extremely flexible language, and allows objects to modify themselves in many interesting ways at runtime. Making a change to prevent that kind of flexibility would break a massive amount of other people's code, so for the sake of backwards compatibility I don't think it will happen any time soon (if at all).
As well as this, because Python code is executed dynamically, it is very difficult to design a system that can determine ahead of time exactly what attributes a particular class will use, especially given the existence of setattr() and other similar functions, which can modify the state of other objects in unpredictable ways.
In summary, Python is designed to value flexibility over performance, and as such, having __slots__ be an optional technique to speed up parts of your code is a trade-off that you choose to make if you wish to write your code in Python. I can't answer whether this is a worthwhile design decision for you, since it's entirely based on opinion.
If you wish to have a bit more safety to prevent issues such as the one you described, there are tools such as mypy and pylint which can catch that sort of error.
Are there any conventions on how to implement services in Django? Coming from a Java background, we create services for business logic and we "inject" them wherever we need them.
Not sure if I'm using python/django the wrong way, but I need to connect to a 3rd party API, so I'm using an api_service.py file to do that. The question is, I want to define this service as a class, and in Java, I can inject this class wherever I need it and it acts more or less like a singleton. Is there something like this I can use with Django or should I build the service as a singleton and get the instance somewhere or even have just separate functions and no classes?
TL;DR It's hard to tell without more details but chances are you only need a mere module with a couple plain functions or at most just a couple simple classes.
Longer answer:
Python is not Java. You can of course (technically I mean) use Java-ish designs, but this is usually not the best thing to do.
Your description of the problem to solve is a bit too vague to come with a concrete answer, but we can at least give you a few hints and pointers (no pun intended):
1/ Everything is an object
In Python, everything (well, everything you can find on the RHS of an assignment, that is) is an object, including modules, classes, functions and methods.
One of the consequences is that you don't need any complex framework for dependency injection - you just pass the desired object (module, class, function, method, whatever) as argument and you're done.
Another consequence is that you don't necessarily need classes for everything - a plain function or module can be just enough.
A typical use case is the strategy pattern, which, in Python, is most often implemented using a mere callback function (or any other callable FWIW).
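As a minimal sketch of that idea (all names here are invented for the example), the "dependency" is simply passed in as an argument instead of being wired up by an injection framework:

def send_email(user, message):
    print("emailing %s: %s" % (user, message))

def send_sms(user, message):
    print("texting %s: %s" % (user, message))

def notify_all(users, message, send=send_email):
    # 'send' can be any callable with the same signature:
    # a function, a bound method, a lambda, an object with __call__...
    for user in users:
        send(user, message)

notify_all(["alice", "bob"], "hello")            # default strategy
notify_all(["alice", "bob"], "hello", send_sms)  # injected strategy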
2/ A Python module is a singleton
As stated above, at runtime a Python module is an object (of type module) whose attributes are the names defined at the module's top level.
Except for some (pathological) corner cases, a Python module is only imported once for a given process and is guaranteed to be unique. Combined with the fact that Python's "global" scope is really only "module-level" global, this makes modules proper singletons, so this design pattern is actually already builtin.
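As a sketch (the file and function names are hypothetical), the whole "singleton service" can just be a module holding its own state:

# api_service.py -- the module itself acts as the singleton
_session = None

def get_session():
    global _session
    if _session is None:
        _session = object()  # stand-in for an expensive-to-create connection
    return _session

def call_remote(endpoint):
    return "called %s with %r" % (endpoint, get_session())

# any other module that does `import api_service` gets the same
# module object, hence the same _session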
3/ A Python class is (almost) a singleton
Python classes are objects too (instances of type type, directly or indirectly), and Python has classmethods (methods that act on the class itself instead of acting on the current instance) and class-level attributes (attributes that belong to the class object itself, not to its instances), so if you write a class that only has classmethods and class attributes, you technically have a singleton - and you can use this class either directly or through instances without any difference, since classmethods can be called on instances too.
The main difference here wrt/ "modules as singletons" is that with classes you can use inheritance...
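A rough sketch of that, with invented names:

class ApiService:
    # class-level state, shared by everyone using the class
    base_url = "https://example.com/api"
    _token = None

    @classmethod
    def authenticate(cls, token):
        cls._token = token

    @classmethod
    def get(cls, path):
        return "GET %s%s (token=%r)" % (cls.base_url, path, cls._token)

# used directly on the class -- no instance needed, and a subclass
# can override base_url or individual methods
ApiService.authenticate("secret")
ApiService.get("/users")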
4/ Python has callables
Python has the concept of "callable" objects. A "callable" is an object whose class implements the __call__() operator, and each such object can be called as if it were a function.
This means that you can not only use functions as objects but also use objects as functions - IOW, the "functor" pattern is builtin. This makes it very easy to "capture" some context in one part of the code and use this context for computations in another part.
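For example (a toy sketch, names invented), a callable object can capture some context at creation time and be used later as if it were a function:

class Discount:
    def __init__(self, rate):
        self.rate = rate                     # context captured here...

    def __call__(self, price):
        return price * (1 - self.rate)       # ...and used here

ten_percent_off = Discount(0.10)
prices = [100, 250, 80]
print(list(map(ten_percent_off, prices)))    # the object is used like a function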
5/ A Python class is a factory
Python has no new keyword. Python classes are callables, and instantiation is done by just calling the class.
This means that you can actually use a class or function the same way to get an instance, so the "factory" pattern is also builtin.
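Since instantiation is just a call, a class and a function can be swapped freely wherever a "factory" is expected (the names below are invented for the sketch):

import datetime

class Order:
    def __init__(self):
        self.created = datetime.datetime.now()

def make_fake_order():
    # a plain function can stand in for the class, e.g. in tests
    order = Order()
    order.created = datetime.datetime(2000, 1, 1)
    return order

def process(order_factory):
    # the caller neither knows nor cares whether it calls a class or a function
    order = order_factory()
    return order.created

process(Order)             # the class used as a factory
process(make_fake_order)   # a function used as a factory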
6/ Python has computed attributes
Besides the most obvious application (replacing a public attribute with a getter/setter pair without breaking client code), this - combined with other features like callables etc. - can prove to be very powerful. As a matter of fact, that's how functions defined in a class become methods.
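A small sketch of a computed attribute using the builtin property (the class is invented for the example):

class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    @property
    def area(self):
        # computed on access; client code reads it like a plain attribute
        return self.width * self.height

r = Rectangle(3, 4)
print(r.area)   # 12 -- no parentheses, it looks like a regular attribute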
7/ Python is dynamic
Python's objects are (usually) dict-based (there are exceptions but those are few and mostly low-level C-coded classes), which means you can dynamically add / replace (and even remove) attributes and methods (since methods are attributes) on a per-instance or per-class basis.
While this is not a feature you want to use without good reason, it's still a very powerful one, as it allows you to dynamically customize an object (remember that classes are objects too), allowing for more complex object and class creation schemes than what you can do in a static language.
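For instance (purely illustrative, class name invented), a method can be added to a class after it has been defined, and it immediately becomes available on every instance:

class Robot:
    def __init__(self, name):
        self.name = name

r = Robot("R2")          # created before the method exists

def greet(self):
    return "Hi, I am %s" % self.name

Robot.greet = greet      # attach the function to the class at runtime

print(r.greet())         # existing instances pick the new method up too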
But Python's dynamic nature goes even further - you can use class decorators and/or metaclasses to tailor the creation of a class object (you may want to have a look at the Django models source code for a concrete example), or even just dynamically create a new class using its metaclass and a dict of functions and other class-level attributes.
Here again, this can really make seemingly complex issues a breeze to solve (and avoid a lot of boilerplate code).
Actually, Python exposes and lets you hook into most of its internals (object model, attribute resolution rules, import mechanism, etc.), so once you understand the whole design and how everything fits together, you really have a handle on most aspects of your code at runtime.
Python is not Java
Now I understand that all of this looks a bit like a vendor's catalog, but the point is to highlight how Python differs from Java and why canonical Java solutions - or (at least) canonical Java implementations of those solutions - usually don't port well to the Python world. It's not that they don't work at all, just that Python usually has more straightforward (and much simpler, IMHO) ways to implement common (and less common) design patterns.
With respect to your concrete use case, you will have to post a much more detailed description, but "connecting to a 3rd party API" (I assume a REST API?) from a Django project is so trivial that it really doesn't warrant much design consideration by itself.
In Python you can use the same program structure as in Java. You don't have to be as strongly typed, but you can be. I'm using types when creating common classes and libraries that are used across multiple scripts.
Here you can read about Python typing
You can do the same here in Python. Define your class in a package (folder) called services.
Then if you want a singleton you can do it like this:
class Service(object):
    instance = None

    def __new__(cls):
        if cls.instance is not None:
            return cls.instance
        else:
            # create the single instance once and cache it on the class
            inst = cls.instance = super(Service, cls).__new__(cls)
            return inst
And now you import it wherever you want in the rest of the code
from services import Service
Service().do_action()
Adding to the answer given by bruno desthuilliers and TreantBG.
There are certain questions that you can ask about the requirements.
For example, one question could be: does the API being called change with different types of objects?
If the API doesn't change, you will probably be okay with keeping it as a method in some file or class.
If it does change, such that you are calling API 1 for some scenario, API 2 for some and so on and so forth, you will likely be better off with moving/abstracting this logic out to some class (from a better code organisation point of view).
PS: Python allows you to be as flexible as you want when it comes to code organisation. It's really up to you to decide how you want to organise the code.
I have a program in python that includes a class that takes a function as an argument to the __init__ method. This function is stored as an attribute and used in various places within the class. The functions passed in can be quite varied, and passing in a key and then selecting from a set of predefined functions would not give the same degree of flexibility.
Now, apologies if a long list of questions like this is not cool, but...
Is there a standard way to achieve this in a language where functions aren't first-class objects?
Do blocks, like in smalltalk or objective-C, count as functions in this respect?
Would blocks be the best way to do this in those languages?
What if there are no blocks?
Could you add a new method at runtime?
In which languages would this be possible (and easy)?
Or would it be better to create an object with a single method that performs the desired operation?
What if I wanted to pass lots of functions, would I create lots of singleton objects?
Would this be considered a more object oriented approach?
Would anyone consider doing this in python, where functions are first class objects?
I don't understand what you mean by "equivalent... using an object oriented approach". In Python, since functions are (as you say) first-class objects, how is it not "object-oriented" to pass functions as arguments?
a standard way to achieve this in a language where functions aren't first class objects?
Only to the extent that there is a standard way of functions failing to be first-class objects, I would say.
In C++, it is common to create another class, often called a functor or functionoid, which defines an overload for operator(), allowing instances to be used like functions syntactically. However, it's also often possible to get by with plain old function-pointers. Neither the pointer nor the pointed-at function is a first-class object, but the interface is rich enough.
This meshes well with "ad-hoc polymorphism" achieved through templates; you can write functions that don't actually care whether you pass an instance of a class or a function pointer.
Similarly, in Python, you can make objects register as callable by defining a __call__ method for the class.
Do blocks, like in smalltalk or objective-C, count as functions in this respect?
I would say they do. At least as much as lambdas count as functions in Python, and actually more so because they aren't crippled the way Python's lambdas are.
Would blocks be the best way to do this in those languages?
It depends on what you need.
Could you add a new method at runtime? In which languages would this be possible (and easy)?
Languages that offer introspection and runtime access to their own compiler. Python qualifies.
However, there is nothing about the problem, as presented so far, which suggests a need to jump through such hoops. Of course, some languages have more required boilerplate than others for a new class.
Or would it be better to create an object with a single method that performs the desired operation?
That is pretty standard.
What if I wanted to pass lots of functions, would I create lots of singleton objects?
You say this as if you might somehow accidentally create more than one instance of the class if you don't write tons of boilerplate in an attempt to prevent yourself from doing so.
Would this be considered a more object oriented approach?
Again, I can't fathom your understanding of the term "object-oriented". It doesn't mean "creating lots of objects".
Would anyone consider doing this in python, where functions are first class objects?
Not without a need for the extra things that a class can do and a function can't. With duck typing, why on earth would you bother?
I'm just going to answer some of your questions.
As they say in the Scheme community, "objects are a poor man's closures" (closures being first-class functions). Blocks are usually just syntactic sugar for closures. For languages that do not have closures, there exist various solutions.
One of the common solutions is to use operator overloading: C++ has a notion of function objects, which define a member operator() ("operator function call"). Python has a similar overloading mechanism, where you define __call__:
class Greeter(object):
    def __init__(self, who):
        self.who = who

    def __call__(self):
        print("Hello, %s!" % self.who)

hello = Greeter("world")
hello()
Yes, you might consider using this in Python instead of storing functions in objects, since functions can't always be pickled (lambdas and nested functions, for example), while instances of a picklable class can be.
In languages without operator overloading, you'll see things like Guava's Function interface.
You could use the strategy pattern. Basically you pass in an object with a known interface but different behavior. It's like passing a function, but one that's wrapped up in an object.
In Smalltalk you'd mostly be using blocks. You can also create classes and instances at runtime.
I was recently going over a coding problem I was having and someone looking at the code said that subclassing list was bad (my problem was unrelated to that class). He said that you shouldn't do it and that it came with a bunch of bad side effects. Is this true?
I'm asking if list is generally bad to subclass and if so, what are the reasons. Alternately, what should I consider before subclassing list in Python?
The abstract base classes provided in the collections module (collections.abc in Python 3), particularly MutableSequence, can be useful when implementing list-like classes. These are available in Python 2.6 and later.
With ABCs you can implement the "core" functionality of your class and it will provide the methods which logically depend on what you've defined.
For example, implementing __getitem__ and __len__ in a Sequence-derived class will be enough to provide your class with __contains__, __iter__, and other methods.
You may still want to use a contained list object to do the heavy lifting.
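As a rough sketch of that approach (Python 3 spelling, where the ABCs live in collections.abc; the Deck class is invented for the example):

from collections.abc import Sequence

class Deck(Sequence):
    """A read-only, list-like collection backed by a plain list."""

    def __init__(self, cards):
        self._cards = list(cards)   # the contained list does the heavy lifting

    # the two abstract methods Sequence requires...
    def __getitem__(self, index):
        return self._cards[index]

    def __len__(self):
        return len(self._cards)

deck = Deck(["ace", "king", "queen"])
# ...and the ABC fills in __contains__, __iter__, __reversed__, index, count
print("king" in deck, list(reversed(deck)), deck.index("queen"))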
There are no benefits to subclassing list. None of the built-in list methods will call any methods you override, so you can get unexpected bugs. Further, it's very often confusing doing things like self.append instead of self.foos.append, or especially self[4] rather than self.foos[4], to access your data. You can make something that works exactly like a list, or (better) however much like a list you really want, while just subclassing object.
I think the first question I'd ask myself is, "Is my new object really a list?". Does it walk like a list, talk like a list? Or is it something else?
If it is a list, then the standard list methods should all make sense.
If the standard list methods don't make sense, then your object should contain a list, not be a list.
In old Python (pre-2.2, when built-in types couldn't be subclassed directly) subclassing list was a bad idea for various technical reasons, but in modern Python it is fine.
Nick is correct.
Also, while I can't speak to Python, in other OO languages (Java, Smalltalk) subclassing a list is a bad idea. Inheritance in general should be avoided and delegation-composition used instead.
Rather, you make a container class and delegate calls to the list. The container class has a reference to the list and you can even expose the calls and returns of the list in your own methods.
This adds flexibility and allows you to change the implementation (a different list type or data structure) later w/o breaking any code. If you want your list to do different listy-type things then your container can do this and use the plain list as a simple data structure.
Imagine if you had 47 different uses of lists. Do you really want to maintain 47 different subclasses?
Instead you could do this via the container and interfaces. One class to maintain and allow people to call your new and improved methods via the interface(s) with the implementation remaining hidden.
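A minimal Python sketch of that delegation style (the names are made up):

class TaskList:
    """Wraps a plain list and exposes only the operations we actually want."""

    def __init__(self):
        self._tasks = []            # has-a list, is not a list

    def add(self, task):
        self._tasks.append(task)    # delegate to the underlying list

    def pending(self):
        return [t for t in self._tasks if not t.startswith("DONE:")]

    def __len__(self):
        return len(self._tasks)

todo = TaskList()
todo.add("write report")
todo.add("DONE: send email")
print(len(todo), todo.pending())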
http://docs.python.org/3.0/whatsnew/3.0.html says it lists what's new, but in my opinion it only lists differences, so does anybody know of any completely new Python features introduced in release 3.x?
To avoid confusion, I will define a completely new feature as something that has never appeared in any other code before, something you walk up to and go "Ooh, shiny!". E.g. a function to make aliens invade, etc.
Many of the completely new features introduced in 3.0 were also backported to 2.6, a deliberate choice. However, this was not practical in all cases, so some of the new features remained Python 3-only.
The new way metaclasses are specified is probably the biggest single new feature. The syntax is clearly better than 2.*'s __metaclass__ assignment:
class X(abase, metaclass=Y):
but more importantly, the new syntax means the compiler knows the metaclass to use before it processes the class body, and so the metaclass can finally influence the way the class body is processed -- this was not possible in 2.*. Specifically, the metaclass's new __prepare__ method can return any writable mapping, and if so then that's used instead of a regular dict to record the assignments (and assigning keywords such as def) performed in the class body. In particular, this lets the order of the class body finally get preserved exactly as it's written down, as well as allowing the metaclass, if it so chooses, to record multiple assignments/definitions for any name in the class body, rather than just the last assignment or definition performed for that name. This hugely broadens the applicability of classes with appropriate custom metaclasses, compared to what was feasible in 2.*.
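A small illustration of __prepare__ (the metaclass and class names are invented): by returning a custom mapping, the metaclass sees every assignment made in the class body, in order:

import collections

class RecordingMeta(type):
    @classmethod
    def __prepare__(mcls, name, bases):
        # the class body will execute with this mapping as its namespace
        return collections.OrderedDict()

    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, dict(namespace))
        # the definition order is available to the metaclass
        cls._field_order = [k for k in namespace if not k.startswith("__")]
        return cls

class Point(metaclass=RecordingMeta):
    x = 0
    y = 0

    def move(self):
        pass

print(Point._field_order)   # ['x', 'y', 'move'] -- body order preserved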
Another syntax biggie is annotations (PEP 3107) -- see that PEP for details. Python's standard library gives no special semantics to annotations, but exactly because of that, third-party frameworks and tools are empowered to apply any semantics they wish -- tasks such as type-checking of function arguments are thereby enabled, though not directly performed by the standard Python library.
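For example (the function is made up), annotations are just stored on the function object; it's up to third-party tools to give them meaning:

def scale(value: float, factor: "multiplier, any number" = 2) -> float:
    return value * factor

# Python itself only records them -- no checking is performed
print(scale.__annotations__)
# {'value': <class 'float'>, 'factor': 'multiplier, any number', 'return': <class 'float'>}
print(scale("!", 3))   # still runs and prints '!!!' -- annotations are not enforced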
There are of course many others (the new "views" concept embodied by such methods as dict's .keys &c in 3.*, keyword-only arguments, better sequence unpacking, nonlocal for more powerful closures, ...), of varying heft, but all pretty useful and well-designed.
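A few of those in one short sketch (the names are invented for illustration):

def connect(host, *, timeout=10):      # keyword-only argument (3.x)
    return (host, timeout)

first, *rest = [1, 2, 3, 4]            # extended sequence unpacking (3.x)

def make_counter():
    count = 0
    def bump():
        nonlocal count                 # rebind the enclosing variable (3.x)
        count += 1
        return count
    return bump

counter = make_counter()
print(connect("db.local", timeout=3), first, rest, counter(), counter())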
The section New Syntax lists, well, the new syntax in Python 3.x. I think it's debatable sometimes whether stuff is new or changed. E.g. exception chaining (PEP 3134): is that a new feature, or is it a change to the exception machinery?
In general, I recommend looking at all the PEPs listed in the document. They are the major changes/new features.