Stubbing out functions or classes - python

Can you explain the concept stubbing out functions or classes taken from this article?
class Loaf:
    pass
This class doesn't define any methods or attributes, but syntactically, there needs to be something in the definition, so you use pass. This is a Python reserved word that just means "move along, nothing to see here". It's a statement that does nothing, and it's a good placeholder when you're stubbing out functions or classes.
thank you

stubbing out functions or classes
This refers to writing classes or functions but not yet implementing them. For example, maybe I create a class:
class Foo(object):
    def bar(self):
        pass

    def tank(self):
        pass
I've stubbed out the functions because I haven't yet implemented them. However, I don't think this is a great plan. Instead, you should do:
class Foo(object):
    def bar(self):
        raise NotImplementedError

    def tank(self):
        raise NotImplementedError
That way, if you accidentally call the method before it is implemented, you'll get an error instead of nothing happening.
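To see the difference, here is a quick sketch (class names are illustrative):

```python
class FooPass:
    def bar(self):
        pass  # stub: silently does nothing

class FooNIE:
    def bar(self):
        raise NotImplementedError  # stub: fails loudly

print(FooPass().bar())  # prints None: the call succeeds and does nothing
try:
    FooNIE().bar()
except NotImplementedError:
    print("bar() is not implemented yet")
```

With pass, a forgotten implementation can go unnoticed for a long time; with raise NotImplementedError, the first accidental call tells you exactly what is missing.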

A 'stub' is a placeholder class or function that doesn't do anything yet, but needs to be there so that the class or function in question is defined. The idea is that you can already use certain aspects of it (such as put it in a collection or pass it as a callback), even though you haven't written the implementation yet.
Stubbing is a useful technique in a number of scenarios, including:
Team development: Often, the lead programmer will provide class skeletons filled with method stubs and a comment describing what the method should do, leaving the actual implementation to other team members.
Iterative development: Stubbing allows for starting out with partial implementations; the code won't be complete yet, but it still compiles. Details are filled in over the course of later iterations.
Demonstrational purposes: If the content of a method or class isn't interesting for the purpose of the demonstration, it is often left out, leaving only stubs.

Note that you can stub functions like this:
def get_name(self) -> str: ...
def get_age(self) -> int: ...
(yes, this is valid Python code!)
It can be useful to stub functions that are added dynamically to an object by a third-party library, when you want to have type hints.
Happened to me... once :-)
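A sketch of that scenario (Person and the lambdas standing in for whatever a real library would attach are invented for illustration):

```python
class Person:
    # stubs so that type checkers and IDEs know the signatures;
    # the bodies are never meant to run
    def get_name(self) -> str: ...
    def get_age(self) -> int: ...

# a third-party library (or plugin machinery) attaches the real
# implementations at runtime:
Person.get_name = lambda self: "Alice"
Person.get_age = lambda self: 30

p = Person()
print(p.get_name(), p.get_age())  # Alice 30
```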

Ellipsis ... is preferable to pass for stubbing.
pass means "do nothing", whereas ... means "something should go here" - it's a placeholder for future code. The effect is the same but the meaning is different.

Stubbing is a technique in software development. After you have planned a module or class, for example by drawing its UML diagram, you begin implementing it.
As you may have to implement a lot of methods and classes, you begin with stubs. This simply means that you only write the definition of a function down and leave the actual code for later. The advantage is that you won't forget methods and you can continue to think about your design while seeing it in code.

The reason for pass is that Python is indentation-dependent and expects one or more indented statements after a colon (such as after a class or function definition).
When you have no statements (as in the case of a stubbed-out function or class), there still needs to be at least one indented statement, so you can use the special pass statement as a placeholder. You could just as easily put something with no effect like:
class Loaf:
    True
and that is also fine (but less clear than using pass in my opinion).

Technical differences in doing nothing vs. pass "" [duplicate]

Are these equivalent?
class Empty: pass
and
class Empty:
    '''
    This class intentionally left blank
    '''
The second one seems better for readability and one could put pass at the end but it does not seem necessary.
Is the comment treated as a pass?
Your two versions are almost equivalent, but not quite. pass is just a no-op. The docstring is almost a no-op as well, but it adds a __doc__ attribute to your class object, so there is a small difference.
A version that would be functionally equivalent to using pass would be to use Ellipsis a.k.a. ...:
class Empty: ...
There is nothing special about ... in this case. Any pre-existing object that you don't assign will work just as well. For example, you could replace ... with None, 1, True, etc. The choice of ... is a popular alternative because it is much more aesthetically pleasing. By convention, it means a stub that is to be filled in, while pass usually indicates a deliberate no-op.
Using ... like that will raise a SyntaxError in Python 2. You can use the named Ellipsis object instead, but that is not nearly as pretty.
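The __doc__ difference is easy to verify (class names here are made up for the demonstration):

```python
class EmptyPass:
    pass

class EmptyDoc:
    '''This class intentionally left blank'''

class EmptyEllipsis: ...

print(EmptyPass.__doc__)      # None
print(EmptyDoc.__doc__)       # This class intentionally left blank
print(EmptyEllipsis.__doc__)  # None: ... does not become a docstring
```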
You may also find this question about the equivalence of pass and return None in functions interesting.
No, they're not equivalent.
Since the implementation of PEP 257, if the first expression in a module, function, or class is a string, that string will be assigned to that module/function/class's __doc__ attribute:
A docstring is a string literal that occurs as the first statement in
a module, function, class, or method definition. Such a docstring
becomes the __doc__ special attribute of that object.
Functionally, the classes are equivalent. However, the difference between having a docstring and not having a docstring can surface when you're creating documentation for your code. Tools like sphinx-autodoc can pick up the docstring and generate documentation for your class, and you'll end up with something like this in your documentation:
class Empty()
    This class intentionally left blank
For this reason, it's generally preferable not to use a docstring for this kind of thing. Instead, it would be better to use a comment:
class Empty:
    pass  # This class intentionally left blank

Python: store expected Exceptions in function attributes

Is it Pythonic to store the expected exceptions of a function as attributes of the function itself, or is it just a stinking bad practice?
Something like this
class MyCoolError(Exception):
    pass

def function(*args):
    """
    :raises: MyCoolError
    """
    # do something here
    if some_condition:
        raise MyCoolError

function.MyCoolError = MyCoolError
And there in other module
try:
    function(...)
except function.MyCoolError:
    # ...
Pro: Anywhere I have a reference to my function, I have also a reference to the exception it can raise, and I don't have to import it explicitly.
Con: I "have" to repeat the name of the exception to bind it to the function. This could be done with a decorator, but it is also added complexity.
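Such a decorator could be sketched like this; `raises` is an invented name, not a standard library feature:

```python
# Hypothetical sketch: attach exception classes to the function object,
# so callers can write `except func.MyCoolError:` without importing it.
def raises(*exceptions):
    def decorate(func):
        for exc in exceptions:
            setattr(func, exc.__name__, exc)
        return func
    return decorate

class MyCoolError(Exception):
    pass

@raises(MyCoolError)
def function():
    raise MyCoolError

try:
    function()
except function.MyCoolError:
    print("caught")
```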
EDIT
Why I am doing this is because I append some methods in an irregular way to some classes, where I think a mixin is not worth it. Let's call it "tailored added functionality". For instance, let's say:
Class A uses methods fn1 and fn2
Class B uses methods fn2 and fn3
Class C uses fn4 ...
And it goes on like this for about 15 classes.
So when I call obj_a.fn2(), I have to explicitly import the exception it may raise (and it is not in the module where classes A, B or C live, but in another one where the shared methods live)... which I think is a little bit annoying. Apart from that, the standard style in the project I'm working on forces one import per line, so it gets pretty verbose.
In some code I have seen exceptions stored as class attributes, and I have found it pretty useful, like:
try:
    obj.fn()
except obj.MyCoolError:
    ...
I think it is not Pythonic. I also think that it does not provide much advantage over the standard way, which is to just import the exception along with the function.
There is a reason (besides helping the interpreter) why Python programs use import statements to state where their code comes from; it helps you find the code of the facilities you are using (e.g. your exception in this case).
The whole idea has the smell of the declaration of exceptions as it is possible in C++ and partly mandatory in Java. There are discussions amongst the language lawyers whether this is a good idea or a bad one, and in the Python world the designers decided against it, so it is not Pythonic.
It also raises a whole bunch of further questions. What happens if your function A is using another function B which then, later, is changed so that it can throw an exception (a valid thing in Python). Are you willing to change your function A then to reflect that (or catch it in A)? Where would you want to draw the line — is using int(text) to convert a string to int reason enough to "declare" that a ValueError can be thrown?
All in all I think it is not Pythonic, no.

What's the point of def some_method(param: int) syntax?

Specifically the ":int" part...
I assumed it somehow checked the type of the parameter at the time the function is called and perhaps raised an exception in the case of a violation. But the following run without problems:
def some_method(param: str):
    print("blah")

some_method(1)

def some_method(param: int):
    print("blah")

some_method("asdfaslkj")
In both cases "blah" is printed - no exception raised.
I'm not sure what the name of the feature is so I wasn't sure what to google.
EDIT: OK, so it's http://www.python.org/dev/peps/pep-3107/. I can see how it'd be useful in frameworks that utilize metadata. It's not what I assumed it was. Thanks for the responses!
FOLLOW-UP QUESTION - Any thoughts on whether it's a good idea or bad idea to define my functions as def some_method(param:int) if I really only can handle int inputs - even if, as pep 3107 explains, it's just metadata - no enforcement as I originally assumed? At least the consumers of the methods will see clearly what I intended. It's an alternative to documentation. Think this is good/bad/waste of time? Granted, good parameter naming (unlike my contrived example) usually makes it clear what types are meant to be passed in.
it's not used for anything much - it's just there for experimentation (you can read them from within python if you want, for example). they are called "function annotations" and are described in pep 3107.
i wrote a library that builds on it to do things like type checking (and more - for example you can map more easily from JSON to python objects) called pytyp (more info), but it's not very popular... (i should also add that the type checking part of pytyp is not at all efficient - it can be useful for tracking down a bug, but you wouldn't want to use it across an entire program).
[update: i would not recommend using function annotations in general (ie with no particular use in mind, just as docs) because (1) they might eventually get used in a way that you didn't expect and (2) the exact type of things is often not that important in python (more exactly, it's not always clear how best to specify the type of something in a useful way - objects can be quite complex, and often only "parts" are used by any one function, with multiple classes implementing those parts in different ways...). this is a consequence of duck typing - see the "more info" link for related discussion on how python's abstract base classes could be used to tackle this...]
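As mentioned above, annotations can be read from within Python; they simply live on the function object:

```python
def mul(a: int, b: int) -> int:
    return a * b

# annotations are stored in a plain dict on the function object:
print(mul.__annotations__)
# {'a': <class 'int'>, 'b': <class 'int'>, 'return': <class 'int'>}
```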
Function annotations are what you make of them.
They can be used for documentation:
def kinetic_energy(mass: 'in kilograms', velocity: 'in meters per second'):
    ...
They can be used for pre-condition checking:
def validate(func, locals):
    for var, test in func.__annotations__.items():
        value = locals[var]
        msg = 'Var: {0}\tValue: {1}\tTest: {2.__name__}'.format(var, value, test)
        assert test(value), msg

def is_int(x):
    return isinstance(x, int)

def between(lo, hi):
    def _between(x):
        return lo <= x <= hi
    return _between

def f(x: between(3, 10), y: is_int):
    validate(f, locals())
    print(x, y)

>>> f(5, 31.1)
Traceback (most recent call last):
    ...
AssertionError: Var: y Value: 31.1 Test: is_int
Also see http://www.python.org/dev/peps/pep-0362/ for a way to implement type checking.
Not experienced in python, but I assume the point is to annotate/declare the parameter type that the method expects. Whether or not the expected type is rigidly enforced at runtime is beside the point.
For instance, consider:
intToHexString(param: int)
Although the language may technically allow you to call intToHexString("Hello"), it's not semantically meaningful to do so. Having the :int as part of the method declaration helps to reinforce that.
It's basically just used for documentation. When someone examines the method signature, they'll see that param is labelled as an int, which tells them the author of the method expected them to pass an int.
Because Python programmers use duck typing, this doesn't mean you have to pass an int, but it tells you the code is expecting something "int-like". So you'll probably have to pass something basically "numeric" in nature, that supports arithmetic operations. Depending on the method it may have to be usable as an index, or it may not.
However, because it's syntax and not just a comment, the annotation is visible to any code that wants to introspect it. This opens up the possibility of writing a typecheck decorator that can enforce strict type checking on arbitrary functions; this allows you to put the type checking logic in one place, and have each method declare which parameters it wants strictly type checked (by attaching a type annotation) with a minimum on syntax, in a way that is visible to client programmers who are browsing method definitions to find out the interface.
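A minimal sketch of such a typecheck decorator (the name `typecheck` and all details are illustrative, not from any existing library):

```python
import functools
import inspect

def typecheck(func):
    # capture the signature once, at decoration time
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = func.__annotations__.get(name)
            # only enforce annotations that are actual classes
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError(
                    f"{name} must be {expected.__name__}, "
                    f"got {type(value).__name__}"
                )
        return func(*args, **kwargs)

    return wrapper

@typecheck
def double(x: int) -> int:
    return x * 2

print(double(21))  # 42
```

Without the decorator, double("a") would quietly return "aa"; with it, the call fails with a TypeError at the boundary.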
Or you could do other things with those annotations. No standardized meaning has yet been developed. Maybe if someone comes up with a killer feature that uses them and has huge adoption, then it'll one day become part of the Python language, but I suspect the flexibility of using them however you want will be too useful to ever do that.
You might also use the "-> returnValue" notation to indicate what type the function might return.
def mul(a: int, b: int) -> None:
    print(a*b)

What is the best-maintained generic functions implementation for Python?

A generic function is dispatched based on the type of all its arguments. The programmer defines several implementations of a function. The correct one is chosen at call time based on the types of its arguments. This is useful for object adaptation among other things. Python has a few generic functions including len().
These packages tend to allow code that looks like this:
@when(int)
def dumbexample(a):
    return a * 2

@when(list)
def dumbexample(a):
    return [("%s" % i) for i in a]

dumbexample(1)          # calls first implementation
dumbexample([1, 2, 3])  # calls second implementation
A less dumb example I've been thinking about lately would be a web component that requires a User. Instead of requiring a particular web framework, the integrator would just need to write something like:
class WebComponentUserAdapter(object):
    def __init__(self, guest):
        self.guest = guest

    def canDoSomething(self):
        return self.guest.member_of("something_group")

@when(my.webframework.User)
def componentNeedsAUser(user):
    return WebComponentUserAdapter(user)
Python has a few generic functions implementations. Why would I chose one over the others? How is that implementation being used in applications?
I'm familiar with Zope's zope.component.queryAdapter(object, ISomething). The programmer registers a callable adapter that takes a particular class of object as its argument and returns something compatible with the interface. It's a useful way to allow plugins. Unlike monkey patching, it works even if an object needs to adapt to multiple interfaces with the same method names.
I'd recommend the PEAK-Rules library by P. Eby. By the same author (though deprecated) is the RuleDispatch package, the predecessor of PEAK-Rules; the latter is no longer maintained, IIRC.
PEAK-Rules has a lot of nice features, one being that it is (well, not easily, but) extensible. Besides "classic" dispatch on types only, it features dispatch on arbitrary expressions as "guardians".
The len() function is not a true generic function (at least in the sense of the packages mentioned above, and also in the sense, this term is used in languages like Common Lisp, Dylan or Cecil), as it is simply a convenient syntax for a call to specially named (but otherwise regular) method:
len(s) == s.__len__()
Also note, that this is single-dispatch only, that is, the actual receiver (s in the code above) determines the method implementation called. And even a hypothetical
def call_special(receiver, *args, **keys):
    return receiver.__call_special__(*args, **keys)
is still a single-dispatch function, as only the receiver is used when the method to be called is resolved. The remaining arguments are simply passed on, but they don't affect the method selection.
This is different from multiple-dispatch, where there is no dedicated receiver, and all arguments are used in order to find the actual method implementation to call. This is, what actually makes the whole thing worthwhile. If it were only some odd kind of syntactic sugar, nobody would bother with using it, IMHO.
from peak.rules import abstract, when

@abstract
def serialize_object(object, target):
    pass

@when(serialize_object, (MyStuff, BinaryStream))
def serialize_object(object, target):
    target.writeUInt32(object.identifier)
    target.writeString(object.payload)

@when(serialize_object, (MyStuff, XMLStream))
def serialize_object(object, target):
    target.openElement("my-stuff")
    target.writeAttribute("id", str(object.identifier))
    target.writeText(object.payload)
    target.closeElement()
In this example, a call like
serialize_object(MyStuff(10, "hello world"), XMLStream())
considers both arguments in order to decide, which method must actually be called.
For a nice usage scenario of generic functions in Python I'd recommend reading the refactored code of the peak.security which gives a very elegant solution to access permission checking using generic functions (using RuleDispatch).
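As an aside on single dispatch: since Python 3.4 (which postdates this question), the standard library ships functools.singledispatch, which implements exactly the single-dispatch case discussed above, dispatching on the type of the first argument only:

```python
from functools import singledispatch

@singledispatch
def dumbexample(a):
    # fallback for unregistered types
    raise NotImplementedError("unsupported type: %s" % type(a).__name__)

@dumbexample.register(int)
def _(a):
    return a * 2

@dumbexample.register(list)
def _(a):
    return [("%s" % i) for i in a]

print(dumbexample(1))          # 2
print(dumbexample([1, 2, 3]))  # ['1', '2', '3']
```

It does not cover the multiple-dispatch case the PEAK-Rules example shows, but for single dispatch it removes the need for a third-party package.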
You can use a construction like this:
def my_func(*args, **kwargs):
    pass
In this case args will be a tuple of any positional arguments, and kwargs will be a dictionary of the named ones. From there you can detect their types and act as appropriate.
I'm unable to see the point in these "generic" functions. It sounds like simple polymorphism.
Your "generic" features can be implemented like this without resorting to any run-time type identification.
class intWithDumbExample(int):
    def dumbexample(self):
        return self * 2

class listWithDumbExample(list):
    def dumbexample(self):
        return [("%s" % i) for i in self]

def dumbexample(a):
    return a.dumbexample()

SQLAlchemy sqlalchemy.sql.expression.select vs. sqlalchemy.sql.expression.Select

So I'm brand new to SQLAlchemy, and I'm trying to use the SQL Expression API to create a SELECT statement that specifies the exact columns to return. I found both a class and a function defined in the sqlalchemy.sql.expression module and I'm not too sure which to use... Why do they have both a class and a function? When would you use one over the other? And would anyone be willing to explain why they need to have both in their library? It doesn't really make much sense to me to be honest, other than just to confuse me. :) JK
Thanks for the help in advance!
Use the source.
Here's the implementation of the select function, from the source code:
def select(columns=None, whereclause=None, from_obj=[], **kwargs):
    """Returns a ``SELECT`` clause element.

    (... long docstring ...)
    """
    return Select(columns, whereclause=whereclause, from_obj=from_obj, **kwargs)
So, it is exactly the same.
The expression package provides Python functions to do everything. These functions in some cases return a class instance verbatim from the function's arguments, and other times compose an object from several components. It was originally the idea that the functions would be doing a lot more composition than they ended up doing in the end. In any case, the package prefers to stick to PEP 8 as far as classes being in CamelCase and functions being all lowercase, and wanted the front-end API to be all lowercase - so you have the public "constructor" functions.
The SQL expression language is very easy to grok if you start with the tutorial.
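The pattern described above (a lowercase PEP 8 "constructor" function wrapping a CamelCase class) can be sketched in plain Python; this Select is a stand-in for illustration, not the real SQLAlchemy class:

```python
class Select:
    """CamelCase class: the actual object being built."""
    def __init__(self, columns, whereclause=None):
        self.columns = columns
        self.whereclause = whereclause

def select(columns=None, whereclause=None, **kwargs):
    """Lowercase public front end that simply returns a Select instance."""
    return Select(columns, whereclause=whereclause, **kwargs)

stmt = select(["id", "name"])
print(isinstance(stmt, Select))  # True
```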
I think it's pretty much the same. The documentation says for select (the function):
The returned object is an instance of Select.
As you can pass the select function the same parameters that Select.__init__() accepts, I don't really see a difference. At first glance the arguments of the class constructor seem to be a superset of the function's, but the function can be passed any of the constructor's keyword arguments.
