Why are classes mostly instantiated through functions? - python

I've been using Python for scientific purposes for some years now. I recently became more familiar with writing classes, but I feel like I'm missing something regarding the standard way to instantiate them.
Say I define a class MyClass.
class MyClass:
    def __init__(self):
        pass
Then I know that I can map x to an instance of MyClass simply with
x = MyClass()
This works well and exactly as I expect.
However, it seems to me that when I use code from the standard library or from numpy or scipy, I don't create objects in the same way: as far as I can tell, I generally don't use the name of a class to instantiate it. This implies, I think, that I use neither class methods nor the default constructor of a class, but rather other functions defined outside the class.
For example, numpy's random module uses a class Generator to generate random numbers. However, numpy explicitly recommends not to use the class constructor to get a Generator instance, and to use instead the default_rng function from the random module. So if I want to generate random numbers, I use
rng = numpy.random.default_rng()
to create a Generator instance. This is done without explicitly using the name of the class.
It seems to me that most of the code I use is written in the latter way. Why is that so? Is it somehow considered bad practice to call default class constructors directly? Is it considered better practice to have separate functions in a module that create class instances? Is it only because some preprocessing must usually be done before creating an instance? (I guess not, because in that case, why not do that in the initialization of the class?)

No, it is not bad practice to use the normal constructor, but sometimes it can be useful to have an alternative constructor.
Reasons for using a function as an alternative constructor to create an object (not a complete list, and in no particular order):
- Decouple the creation of an object from its implementation. Decoupling is often aimed for in OOP.
- Hide complexity: the constructor could have many parameters, but often a default object is all that is needed.
- Easier to read, write, and understand: numpy.random.default_rng() vs numpy.random.Generator(numpy.random.PCG64()).
- A factory that creates and returns a (possibly different) object based on sometimes complex conditions; e.g., Python's open() returns different objects for text files and for binary files.
Where to implement these?
In some other languages, these would be implemented as class methods of the class they instantiate, or even as methods of a separate class. This could be done in Python too, but it is often shorter and more convenient to implement them as functions at module level.
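To make that concrete, here is a minimal sketch (all names are hypothetical, deliberately echoing but not reproducing numpy's actual API) showing the same default object exposed both as a classmethod and as a module-level factory function:

class Generator:
    def __init__(self, bit_generator):
        self.bit_generator = bit_generator

    @classmethod
    def with_default_engine(cls):
        # Alternative constructor as a classmethod.
        return cls(bit_generator="PCG64")

def default_generator():
    # Module-level factory: hides the choice of a default engine
    # from the caller entirely.
    return Generator(bit_generator="PCG64")

rng = default_generator()               # short and obvious at the call site
rng2 = Generator.with_default_engine()  # works too, but is longer to type

Both spellings hide the same complexity; the module-level function is simply the shorter one at the call site.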

I think the np.array call used to create an np.ndarray is probably one of the most common examples of an object being created by calling another function. Here is an explanation of that:
What is the difference between ndarray and array in numpy?
I cannot answer for all cases in which a function is used to "wrap" the construction of an object, but I have used such functions to simplify object creation in many situations, and it results in cleaner code. I can speak to those situations.
For example, the underlying class definition may expose a lot of parameters. It may not make sense to ask the user to provide values for all of them in, say, 99.9% of cases. These "spurious" parameters may be fixed, or may be inferred from other parameter values in most situations (e.g., parameter b is 2x parameter a in most cases). The code becomes unwieldy if those 99.9% of cases must explicitly provide values for such parameters, so a wrapper function is written to make things cleaner.
It is possible to use default parameters to deal with many such situations, but it may not make sense to push the inference of parameter values into the class's init function itself. For example, while something like b = 2 * a if b is None else b seems reasonable to put in the init function, where a and b are parameters, it may not be so simple in practice (e.g., b may have a complex relationship with a, c, d, f, etc., or it may be a class object itself), or there may be a thousand such parameter inferences to be made. So it is logical to separate such "glue" code (a customization for ease of usage) into another function, as sketched below, and keep the base code (which implements a specific piece of functionality) clean and to the point.
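A minimal sketch of this kind of glue code (parameter names invented for illustration): the class keeps its full, explicit interface, and the wrapper infers the values most callers never touch:

class Model:
    # Base class: explicit about every parameter, no inference logic.
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c

def make_model(a, b=None, c=0.0):
    # Wrapper for the 99.9% case: infer b from a unless the caller
    # overrides it, keeping the inference out of Model.__init__.
    if b is None:
        b = 2 * a
    return Model(a, b, c)

m = make_model(3)         # b inferred as 6
m2 = make_model(3, b=10)  # explicit override is still possible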
Do we want to write another class wrapper instead of a function wrapper? A class wrapper would present the same simplified interface, but it is unnecessary here: a class implies many things, while a function implies just procedural execution.
Note that this happens mostly with library-type code, which has the largest number of use cases and where you want to make usage as easy as possible for most people. Such issues do not exist for most "user" code, where we simply write classes for a specific application. So in practice, when we write applications, we should create classes directly via their constructors when possible.
There is also the popular Factory design pattern that @ekhumoro referenced above, which is very similar to this. But by the textbook definition, the Factory pattern seems to be restricted to superclass/subclass hierarchies (I could be wrong, and this might be useless semantics).

Related

Class inheritance order for a simulation program

This is a somewhat basic question about the correct order of class inheritance.
Basically I'm trying to write a numerical simulation to solve a physical model; the details are not important (I happen to be writing this in Python). It is a well-known algorithm solved by iterating over a volume of space.
The classes that I think I need are:
- Setup: defines all of the simulation parameters, like volume size, and has methods for checking for correct parameter types, calculating derived parameters, etc.
- Solver: contains the actual algorithm for solving.
- Output: contains handles for all the plot output and has access to the save file, etc.
I also need a run method which can run the solver and periodically (with periods defined in Setup) run some of the output functions.
In a high-quality program, which class would inherit from which? (My guess: Output inherits from Solver, which inherits from Setup.)
Where does the run method belong? Maybe there should be some extra base class, like Interface, that the user interacts with and that includes the run method?
There is a principle that encourages the use of composition over inheritance (http://en.wikipedia.org/wiki/Composition_over_inheritance), so I would say that if you don't really need inheritance, don't use it (the pieces can be independent objects, or functions, which in Python are objects too).
If you model this with objects, run() should be in Solver. Recall that Python does not require an explicit interface concept the way some other languages do, so you can either use objects or plain functions with the algorithms you need; a sketch of the composition approach follows.
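A minimal sketch of composition (class, method, and attribute names such as n_steps, output_period, step(), and state are just illustrative, not from the question): a plain driver object holds the three parts and owns run(), instead of any of them inheriting from another:

class Simulation:
    # Composition: holds a Setup, a Solver and an Output
    # rather than inheriting from any of them.
    def __init__(self, setup, solver, output):
        self.setup = setup
        self.solver = solver
        self.output = output

    def run(self):
        # Drive the solver, saving output with the period defined in Setup.
        for step in range(self.setup.n_steps):
            self.solver.step()
            if step % self.setup.output_period == 0:
                self.output.save(self.solver.state)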
Are you coming from a Java background by any chance?
First off, you've given no indication that any of your classes should inherit from another. For that matter, you probably don't need as many classes as you think you do.
"Solver: Contains the actual algorithm for solving"
If it's only one function you might as well just leave it as a free function.
"Output: Contains handles for all the plot output and has access to save file etc."
If the functions don't have shared state, it could just as easily be a module.
As for the run method, just stick it wherever it is most convenient. The nice thing about Python is that you can start prototyping without any classes, and just refactor into a class whenever you find yourself passing the same set of data around a lot.

Pythonic way to add several methods to several classes?

What is the most pythonic way to add several identical methods to several classes?
I could use a class decorator, but that seems to bring in a fair bit of complication, and it's harder to write and read than the other methods.
I could make a base class with all the methods and let the other classes inherit, but then for some of the classes I would be very tempted to allow multiple inheritance, which I have read frequently is to be avoided or minimized. Also, the "is-a" relationship does not apply.
I could also change them from methods to stand-alone functions which just expect their arguments to supply the appropriate attributes through duck-typing. This is in some ways clean, but it is less object-oriented and makes it less clear when a function can be used on a given type of object.
I could use delegation, but this requires every class that wants the functionality to have methods calling up to the helper class's methods. This would make the code base much longer than the other options and would require adding a delegating method every time I want to add a new function to the helper class.
I know giving one class an instance of the other as an attribute works nicely in some cases, but it does not always work cleanly and can make calls more complicated than they would be otherwise.
After playing around with it a bit, I am leaning towards inheritance even when it leads to multiple inheritance. But I hesitate due to numerous texts warning very strongly against ever allowing multiple inheritance, and some (such as the Wikipedia entry) going so far as to say that inheritance purely for code reuse, such as this, should be minimized.
This may be more clear with an example, so for a simplified example say we are dealing with numerous distinct classes which all have a location on an x, y grid. There are a lot of operations we might want to make methods of everything with an x, y location, such as a method to get the distance between two such entities, or the relative direction, midpoint between them, etc.
What would be the most pythonic way to give all such classes access to these methods that rely only on having x and y as attributes?
For your specific example, I would try to take advantage of duck-typing. Write plain simple functions that take objects which are assumed to have x and y attributes:
import math

def distance(a, b):
    """
    Returns the distance between `a` and `b`.
    `a` and `b` should have `x` and `y` attributes.
    """
    return math.sqrt((a.x - b.x)**2 + (a.y - b.y)**2)
Very simple. To make it clear how the function can be used, just document it.
Plain old functions are best for this problem. E.g. instead of this ...
class BaseGeoObject:
    def distanceFromZeroZero(self):
        return math.sqrt(self.x()**2 + self.y()**2)
    ...
... just have functions like this one:
def distanceFromZeroZero(point):
    return math.sqrt(point.x()**2 + point.y()**2)
This is a good solution because it's also easy to test - it's not necessary to subclass just to exercise a specific function.
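To illustrate both points, a short self-contained sketch (class names invented): two unrelated classes work with distance() as long as they expose x and y attributes, and the function can be tested without subclassing anything:

import math

class Player:
    def __init__(self, x, y):
        self.x, self.y = x, y

class Treasure:
    def __init__(self, x, y):
        self.x, self.y = x, y

def distance(a, b):
    # Same duck-typed function as above.
    return math.sqrt((a.x - b.x)**2 + (a.y - b.y)**2)

print(distance(Player(0, 0), Treasure(3, 4)))  # 5.0, no shared base class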

Breaking up functions into passive (algorithm) and active (execution) objects

Summary
What are the pros and cons of splitting pure functions into passive objects that describe the algorithms and active objects that can execute those algorithms? Note that the situation is greatly simplified by the fact that the functions have no side effects.
Detail
The portion of the code I'm writing (in Python 3) will largely adhere to functional programming.
There is some (immutable) data. There are some algorithms. And I need to apply those algorithms to the data, and get the result.
The algorithms could be represented as regular functions, which would be transformed using standard operations (e.g., I may compose two functions, then freeze some parameters using functools.partial, then pass the resulting function to another function as an argument). Many of the lower-level functions would be memoized for performance reasons.
But an idea occurred to me that perhaps I should instead represent algorithms as passive objects. Such objects wouldn't be able to execute anything themselves. When I'm ready to execute, I'll feed the algorithm object and all the inputs it expects into a special "computation" object. This matches my mental model of an algorithm far better, but I'm concerned that I might be missing some problems with this approach.
Algorithm objects could be implemented in a variety of ways; perhaps even multiple implementations could be allowed. Let's say my algorithms are instances of an abstract class Algorithm; then its subclasses could represent:
- strings of text in a domain-specific language that I'll create
- some kind of execution trees that I'll construct
- even regular Python functions
I have never done this before, so I wanted to get some feedback on this idea. Does it offer any real design advantages, apart from my subjective feeling that it's more "natural"? Does it lead to any problems?
I don't think the design offers any major advantage or disadvantage.
Assuming that any computation object can run any Algorithm, then your class Algorithm presumably is going to have a function called something like execute that knows how to run the algorithm. Name that function __call__, and now your Algorithm class is exactly like a Python callable object (including functions).
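A minimal sketch of that equivalence (class names invented): once the execute method is named __call__, an Algorithm instance can be passed anywhere a plain function is expected:

class Algorithm:
    # Passive description of a computation; __call__ makes it usable
    # wherever a plain function is expected.
    def __call__(self, *args, **kwargs):
        raise NotImplementedError

class Scale(Algorithm):
    def __init__(self, factor):
        self.factor = factor

    def __call__(self, x):
        return self.factor * x

double = Scale(2)
print(double(21))                 # 42, just like calling a function
print(list(map(double, [1, 2])))  # [2, 4]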
For your strings of DSL code: under your design you'd represent them as a subclass of Algorithm that overrides execute to run an interpreter. Under the other design you'd just do something like:
def createDSLAlgorithm(code):
    def coderunner(*args, **kwargs):
        return DSLInterpreter().interpret(code, *args, **kwargs)
    return coderunner
And similar to create a function that when called will execute a specified expression tree.
Of course I might be missing something that you're planning to put into your Algorithm design that's not possible for functions. Not all Python functions have mutable attributes, for example. But since user-defined functions can be closures, can have attributes, and any object can "behave like a function" just by implementing __call__, I suspect it's different names for the same thing.
Choosing your own names, of course, is a small advantage if it aids code readability. And it might feel a bit more natural to attach attributes to "objects" than it does to attach them to "functions", if your computation objects are going to interrogate certain known attributes of Algorithms in order to help decide what to do when computing them (for example whether or not to memoize).

pros and cons of using factory vs regular constructor

(Using Python 3.2, though I doubt it matters.)
I have class Data, class Rules, and class Result. I use lowercase to denote an instance of the class.
A rules object contains rules that, if applied to a data object, can create a result object.
I'm deciding where to put the (rather complicated and evolving) code that actually applies the rules to the data. I can see two choices:
1. Put that code inside a method of class Result, say parse_rules. The Result constructor would take a rules object as an argument and pass it on to self.parse_rules.
2. Put that code inside a new class ResultFactory. ResultFactory would be a singleton class with a method, say build_result, which takes rules as an argument and returns a newly built result object.
What are the pros and cons of the two approaches?
The GRASP design principles provide guidelines for assigning responsibility to classes and objects in object-oriented design. For example, the Creator pattern suggests that, in general, a class B should be responsible for creating instances of class A if one, or preferably more, of the following apply:
- Instances of B contain or compositely aggregate instances of A
- Instances of B record instances of A
- Instances of B closely use instances of A
- Instances of B have the initializing information for instances of A and pass it on creation
In your example, you have complicated and evolving code for applying rules to data. That suggests the use of the Factory Pattern.
Putting the code in Results is contraindicated because 1) results don't create results, and 2) results aren't the information expert (i.e. they don't have most of the knowledge that is needed).
In short, the ResultFactory seems like a reasonable place to concentrate the knowledge of how to apply rules to data to generate results. If you were to try to push all of this logic into class constructors for either Results or Rules, it would lead to tight coupling and loss of cohesion.
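A minimal sketch of that direction (class names from the question; the bodies, and the assumption that a rules object is iterable over rules with a hypothetical apply() method, are mine). Note that a plain module-level function can stand in for the singleton factory class:

class Result:
    def __init__(self, values):
        self.values = values

def build_result(rules, data):
    # Factory function: the complicated, evolving "apply rules to data"
    # logic is concentrated here, outside both Result and Rules.
    values = [rule.apply(item) for rule in rules for item in data]
    return Result(values)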
Third scenario:
You may want to consider a third option: put the code inside the method Rules.__call__ and instantiate Result like result = rules(data).
Pros:
- Result can be totally unaware of the Rules that generated it (and maybe even of the original Data).
- Every Rules subclass can customize its Result creation.
- It feels natural (to me): Rules applied to Data yield a Result.
And you'll have a couple of GRASP principles on your side:
- Creator: instances of Rules have the initializing information for instances of Result and pass it on creation.
- Information Expert: this principle leads to placing the responsibility on the class with the most information required to fulfill it.
Side effects:
- Coupling: you'll raise the coupling between Rules and Data, because you need to pass the whole data set to every Rules instance, which means each Rules must be able to decide which data it applies to.
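A minimal sketch of this third scenario (method bodies invented for illustration):

class Result:
    def __init__(self, values):
        self.values = values

class Rules:
    def __call__(self, data):
        # Each Rules subclass can override this to customize
        # how its Result is built.
        return Result([self.apply_one(item) for item in data])

    def apply_one(self, item):
        # Hypothetical hook for subclasses; a real rule would transform item.
        return item

result = Rules()([1, 2, 3])  # i.e. result = rules(data)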
Why not put the rules in their own classes? If you create a RuleBase class, then each rule can derive from it. This way, polymorphism can be used when Data needs rules applied. Data doesn't need to know or care which Rule instances were applied (unless Data itself is the one who knows which rules should be applied).
When rules need to be invoked, a data instance can call RuleBase.ExecuteRules() and pass itself in as an argument. The correct subclass of Rule can be chosen directly from Data, if Data knows which Rule is necessary. Or some other design pattern can be used, like Chain of Responsibility, where Data invokes the pattern and lets a Result come back.
This would make a great whiteboard discussion.
Can you make ResultFactory a pure function? It's not useful to create a singleton object if all you need is a function.
Well, the second is downright silly, especially with all the singletonness. If Result requires Rules to create an instance, and you can't create one without it, it should take that as an argument to __init__. No pattern shopping necessary.

How do we use sin, cos, tan generically (including user-defined types) in Python?

Edit: Let me try to reword and improve my question. The old version is attached at the bottom.
What I am looking for is a way to express and use free functions in a type-generic way. Examples:
abs(x) # maps to x.__abs__()
next(x) # maps to x.__next__() at least in Python 3
-x # maps to x.__neg__()
In these cases the functions have been designed in a way that allows users with user-defined types to customize their behaviour by delegating the work to a non-static method call. This is nice. It allows us to write functions that don't really care about the exact parameter types as long as they "feel" like objects that model a certain concept.
Counter examples: Functions that can't be easily used generically:
math.exp # only for reals
cmath.exp # takes complex numbers
Suppose, I want to write a generic function that applies exp on a list of number-like objects. What exp function should I use? How do I select the correct one?
def listexp(lst):
    return [math.exp(x) for x in lst]
Obviously, this won't work for lists of complex numbers even though there is an exp for complex numbers (in cmath). And it also won't work for any user-defined number-like type which might offer its own special exp function.
So, what I'm looking for is a way to deal with this on both sides -- ideally without special casing a lot of things. As a writer of some generic function that does not care about the exact types of parameters I want to use the correct mathematical functions that is specific to the types involved without having to deal with this explicitly. As a writer of a user-defined type, I would like to expose special mathematical functions that have been augmented to deal with additional data stored in those objects (similar to the imaginary part of complex numbers).
What is the preferred pattern/protocol/idiom for doing that? I have not tested numpy yet, but I downloaded its source code. As far as I know, it offers a sin function for arrays. Unfortunately, I haven't found its implementation in the source code yet, but it would be interesting to see how they managed to pick the right sin function for the kind of numbers the array currently stores.
In C++ I would have relied on function overloading and ADL (argument-dependent lookup). With C++ being statically typed, it should come as no surprise that this (name lookup, overload resolution) is handled completely at compile-time. I suppose, I could emulate this at runtime with Python and the reflective tools Python has to offer. But I also know that trying to import a coding style into another language might be a bad idea and not very idiomatic in the new language. So, if you have a different idea for an approach, I'm all ears.
I guess somewhere, at some point, I need to do some type-dependent dispatching manually in an extensible way. Maybe write a module "tgmath" (type-generic math) that comes with support for real and complex numbers and allows others to register their types and special-case functions... Opinions? What do the Python masters say about this?
TIA
Edit: Apparently, I'm not the only one who is interested in generic functions and type-dependent overloading. There is PEP 3124, but it has been in draft state for four years.
Old version of the question:
I have a strong background in Java and C++ and just recently started learning Python. What I'm wondering about is: How do we extend mathematical functions (at least their names) so they work on other user-defined types? Do these kinds of functions offer any kind of extension point/hook I can leverage (similar to the iterator protocol where next(obj) actually delegates to obj.__next__, etc) ?
In C++ I would have simply overloaded the function with the new parameter type and have the compiler figure out which of the functions was meant using the argument expressions' static types. But since Python is a very dynamic language there is no such thing as overloading. What is the preferred Python way of doing this?
Also, when I write custom functions, I would like to avoid long chains of
if isinstance(arg, someClass):
    suchandsuch
elif ...
What are the patterns I could use to make the code look prettier and more Pythonish?
I guess, I'm basically trying to deal with the lack of function overloading in Python. At least in C++ overloading and argument-dependent lookup is an important part of good C++ style.
Is it possible to make
x = udt(something) # object of user-defined type that represents a number
y = sin(x) # how do I make this invoke custom type-specific code for sin?
t = abs(x) # works because abs delegates to __abs__() which I defined.
work? I know I could make sin a non-static method of the class. But then I lose genericity because for every other kind of number-like object it's sin(x) and not x.sin().
Adding a __float__ method is not acceptable since I keep additional information in the object such as derivatives for "automatic differentiation".
TIA
Edit: If you're curious about what the code looks like, check this out. In an ideal world I would be able to use sin/cos/sqrt in a type-generic way. I consider these functions part of the object's interface even if they are "free functions". In __somefunction I did not qualify the functions with math. nor __main__.; it just works because I manually fall back on math.sin (etc.) in my custom functions via the decorator. But I consider this to be an ugly hack.
You can do this, but it works backwards: you implement __float__() in your new type, and then sin() will work with your class.
In other words, you don't adapt sine to work on other types; you adapt those types so that they work with sine.
This is better because it forces consistency: if there is no obvious mapping from your object to a float, then there probably isn't a reasonable interpretation of sin() for that type.
[Sorry if I missed the "__float__ won't work" part earlier; perhaps you added that in response to this? Anyway, as convincing proof that what you want isn't possible with math.sin alone, note that Python has the cmath library to provide sin() etc. for complex numbers...]
If you want the return type of math.sin() to be your user-defined type, you appear to be out of luck. Python's math library is basically a thin wrapper around a fast native IEEE 754 floating point math library. If you want to be internally consistent and duck-typed, you can at least put the extensibility shim that python is missing into your own code.
import math

def sin(x):
    try:
        return x.__sin__()
    except AttributeError:
        return math.sin(x)
Now you can import this sin function and use it indiscriminately wherever you used math.sin previously. It's not quite as pretty as having math.sin pick up your duck-typing automatically but at least it can be consistent within your codebase.
Define your own versions in a module. This is what's done in cmath for complex number and in numpy for arrays.
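A minimal sketch of that approach, combined with the registration idea from the question, using functools.singledispatch (in the standard library since Python 3.4, so newer than this question; its use here is my suggestion, not something this answer prescribes):

import cmath
import functools
import math

@functools.singledispatch
def sin(x):
    # Default: fall back to the float implementation.
    return math.sin(x)

@sin.register(complex)
def _(x):
    return cmath.sin(x)

# A user-defined number-like type could register its own implementation:
# @sin.register(DualNumber)
# def _(x): ...

print(sin(0.0), sin(1j))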
Typically the answer to questions like this is "you don't" or "use duck typing". Can you provide a little more detail about what you want to do? Have you looked at the remainder of the protocol methods for numeric types?
http://docs.python.org/reference/datamodel.html#emulating-numeric-types
Ideally, you will derive your user-defined numeric types from a native Python type, and the math functions will just work. When that isn't possible, perhaps you can define __int__() or __float__() or __complex__() or __long__() on the object so it knows how to convert itself to a type the math functions can handle.
When that isn't feasible, for example if you wish to take a sin() of an object that stores x and y displacement rather than an angle, you will need to provide either your own equivalents of such functions (usually as a method of the class) or a function such as to_angle() to convert the object's internal representation to the one needed by Python.
Finally, it is possible to provide your own math module that replaces the built-in math functions with your own varieties, so if you want to allow math on your classes without any syntax changes to the expressions, it can be done in that fashion, although it is tricky and can reduce performance, since you'll be doing (e.g.) a fair bit of preprocessing in Python before calling the native implementations.
