A generic function is dispatched based on the types of all its arguments. The programmer defines several implementations of a function, and the correct one is chosen at call time based on the types of its arguments. This is useful for object adaptation, among other things. Python has a few generic functions, including len().
These packages tend to allow code that looks like this:
@when(int)
def dumbexample(a):
    return a * 2

@when(list)
def dumbexample(a):
    return [("%s" % i) for i in a]

dumbexample(1)          # calls first implementation
dumbexample([1, 2, 3])  # calls second implementation
A less dumb example I've been thinking about lately would be a web component that requires a User. Instead of requiring a particular web framework, the integrator would just need to write something like:
class WebComponentUserAdapter(object):
    def __init__(self, guest):
        self.guest = guest
    def canDoSomething(self):
        return self.guest.member_of("something_group")

@when(my.webframework.User)
def componentNeedsAUser(user):
    return WebComponentUserAdapter(user)
Python has a few generic function implementations. Why would I choose one over the others? How is that implementation being used in applications?
I'm familiar with Zope's zope.component.queryAdapter(object, ISomething). The programmer registers a callable adapter that takes a particular class of object as its argument and returns something compatible with the interface. It's a useful way to allow plugins. Unlike monkey patching, it works even if an object needs to adapt to multiple interfaces with the same method names.
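For readers unfamiliar with it, here is a rough sketch of how registering and querying an adapter might look with zope.component; Duck, DuckAdapter, and ISomething are made-up names for illustration, not part of any real API:

from zope.interface import Interface, implementer
from zope.component import provideAdapter, queryAdapter

class ISomething(Interface):
    def do_something():
        "Perform the behaviour the component needs."

class Duck(object):
    def quack(self):
        return "quack"

@implementer(ISomething)
class DuckAdapter(object):
    def __init__(self, duck):
        self.duck = duck
    def do_something(self):
        return self.duck.quack()

# Register the adapter, then look it up by interface:
provideAdapter(DuckAdapter, adapts=(Duck,), provides=ISomething)
adapter = queryAdapter(Duck(), ISomething)  # -> a DuckAdapter instance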
I'd recommend the PEAK-Rules library by P. Eby. Its predecessor, the RuleDispatch package, is by the same author but is deprecated and, IIRC, no longer maintained.
PEAK-Rules has a lot of nice features, one being that it is (well, not easily, but) extensible. Besides "classic" dispatch on types only, it features dispatch on arbitrary expressions as "guardians".
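A hedged sketch of what such expression-based guards might look like in PEAK-Rules (discount_for and the rule bodies are invented for illustration; PEAK-Rules picks the most specific applicable rule):

from peak.rules import abstract, when

@abstract
def discount_for(customer, order):
    """Compute a discount; an implementation is chosen by rule."""

@when(discount_for, "order.total > 100")
def big_order_discount(customer, order):
    return order.total * 0.05

@when(discount_for, "customer.is_vip and order.total > 100")
def vip_discount(customer, order):
    return order.total * 0.10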
The len() function is not a true generic function (at least in the sense of the packages mentioned above, and also in the sense this term is used in languages like Common Lisp, Dylan, or Cecil), as it is simply convenient syntax for a call to a specially named (but otherwise regular) method:
len(s) == s.__len__()
Also note that this is single dispatch only; that is, the actual receiver (s in the code above) determines the method implementation called. And even a hypothetical
def call_special(receiver, *args, **keys):
    return receiver.__call_special__(*args, **keys)
is still a single-dispatch function, as only the receiver is used when the method to be called is resolved. The remaining arguments are simply passed on, but they don't affect the method selection.
This is different from multiple dispatch, where there is no dedicated receiver, and all arguments are used to find the actual method implementation to call. This is what actually makes the whole thing worthwhile. If it were only some odd kind of syntactic sugar, nobody would bother using it, IMHO.
from peak.rules import abstract, when

@abstract
def serialize_object(object, target):
    pass

@when(serialize_object, (MyStuff, BinaryStream))
def serialize_object(object, target):
    target.writeUInt32(object.identifier)
    target.writeString(object.payload)

@when(serialize_object, (MyStuff, XMLStream))
def serialize_object(object, target):
    target.openElement("my-stuff")
    target.writeAttribute("id", str(object.identifier))
    target.writeText(object.payload)
    target.closeElement()
In this example, a call like
serialize_object(MyStuff(10, "hello world"), XMLStream())
considers both arguments in order to decide which method must actually be called.
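(For contrast with the single-dispatch discussion above: the standard library later gained functools.singledispatch in Python 3.4, which, as the name says, dispatches on the type of the first argument only. A minimal sketch:)

from functools import singledispatch

@singledispatch
def describe(obj):
    return "something else"

@describe.register(int)
def _(obj):
    return "an integer"

@describe.register(list)
def _(obj):
    return "a list"

print(describe(3))       # "an integer"
print(describe([1, 2]))  # "a list"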
For a nice usage scenario of generic functions in Python, I'd recommend reading the refactored code of the peak.security package, which gives a very elegant solution to access-permission checking using generic functions (using RuleDispatch).
You can use a construction like this:
def my_func(*args, **kwargs):
    pass
In this case args will be a tuple of any positional arguments, and kwargs will be a dictionary of the named ones. From here you can detect their types and act as appropriate.
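A hedged sketch of that manual-dispatch style, reusing the earlier dumbexample:

def dumbexample(*args, **kwargs):
    # Branch on the type of the first positional argument.
    if args and isinstance(args[0], int):
        return args[0] * 2
    if args and isinstance(args[0], list):
        return [("%s" % i) for i in args[0]]
    raise TypeError("unsupported argument types")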
I'm unable to see the point in these "generic" functions. It sounds like simple polymorphism.
Your "generic" features can be implemented like this without resorting to any run-time type identification.
class intWithDumbExample( int ):
    def dumbexample( self ):
        return self*2

class listWithDumbExample( list ):
    def dumbexample( self ):
        return [("%s" % i) for i in self]

def dumbexample( a ):
    return a.dumbexample()
Related
To implement prettified XML, I have written the following code:
import xml.etree.ElementTree as ET
from xml.dom import minidom

def prettify_by_response(response, prettify_func):
    root = ET.fromstring(response.content)
    return prettify_func(root)

def prettify_by_str(xml_str, prettify_func):
    root = ET.fromstring(xml_str)
    return prettify_func(root)

def make_pretty_xml(root):
    rough_string = ET.tostring(root, "utf-8")
    reparsed = minidom.parseString(rough_string)
    xml = reparsed.toprettyxml(indent="\t")
    return xml

def prettify(response):
    if isinstance(response, str) or isinstance(response, bytes):
        return prettify_by_str(response, make_pretty_xml)
    else:
        return prettify_by_response(response, make_pretty_xml)
In the prettify_by_response and prettify_by_str functions, I pass the function make_pretty_xml as an argument.
Instead of passing the function as an argument, I could simply call that function, e.g.:
def prettify_by_str(xml_str, prettify_func):
    root = ET.fromstring(xml_str)
    return make_pretty_xml(root)
One advantage of passing the function as an argument to these functions, rather than calling it directly, is that they are not tightly coupled to the make_pretty_xml function.
What would be the other advantages, or am I adding unnecessary complexity?
This seems very open to biased answers; I'll try to be impartial, but I can't make any promises.
First, higher-order functions are functions that receive and/or return functions. The advantages are questionable; I'll try to enumerate the uses of HoFs and describe the good and the bad of each one.
Callbacks
Callbacks came about as a solution to blocking calls. I need B to happen after A, so I call something that blocks on A and then calls B. This naturally leads to the question: my system wastes a lot of time waiting for things to happen, so what if, instead of waiting, I pass what needs to be done next as an argument? As with anything in technology that hasn't yet been scaled, it seems like a good idea until it is scaled.
Callbacks are very common in event systems. If you have ever coded in JavaScript, you know what I'm talking about.
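A minimal sketch of the idea (fetch_data and on_done are invented names; a real event system would be asynchronous):

def fetch_data(on_done):
    data = {"status": "ok"}  # pretend this came from a slow I/O call
    on_done(data)            # instead of blocking the caller, hand off to the callback

fetch_data(lambda data: print("received:", data))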
Algorithm abstraction
Some designs, mostly the behavioral ones, can make use of HoFs to choose an algorithm at runtime. You can have a high-level algorithm that receives functions that deal with the low-level details, as sketched below. This leads to more abstraction, code reuse, and portable code. Here, portable means that you can write code to deal with new low-level details without changing the high-level parts. This is not specific to HoFs, but they can be of great help.
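A tiny illustration of this, using sorted's key parameter, a built-in example of a high-level algorithm parameterized by a low-level function:

people = [("alice", 30), ("bob", 25)]

# The high-level algorithm (sorting) is fixed; the low-level detail
# (what to compare) is passed in as a function.
by_age = sorted(people, key=lambda p: p[1])   # [("bob", 25), ("alice", 30)]
by_name = sorted(people, key=lambda p: p[0])  # [("alice", 30), ("bob", 25)]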
Attaching behavior to another function
The idea here is taking a function as an argument and returning a function that does exactly what the argument function does, plus some attached behavior. And this is where (I think) HoFs really shine.
Python decorators are a perfect example. They take a function as an argument and return another function, which is bound to the same identifier as the first function:
@foo
def bar(*args):
    ...

is the same as

def bar(*args):
    ...
bar = foo(bar)
Now, reflect on this code
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n-1) + fib(n-2)
fib is just a Fibonacci function; it calculates the nth Fibonacci number. Now lru_cache attaches a new behavior: caching results for previously calculated values. The logic inside the fib function is not tainted by the LRU cache logic. What a beautiful piece of abstraction we have here.
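For illustration, here is a hedged sketch of roughly what such a caching decorator could look like if written by hand; this is a simplification, not lru_cache's actual implementation:

import functools

def memoize(f):
    cache = {}
    @functools.wraps(f)
    def wrapper(*args):
        # Cache on the positional arguments only, for simplicity.
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    return wrapper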
Applicative style programming or point-free programming
The idea here is to remove variables, or "points", and to express algorithms by combining function applications. I'm sure there are lots of people better than me at this subject wandering SO.
As a side note, this is not a very common style in python.
for i in it:
    func(i)

mapped_it = map(func, it)

In the second example, we removed the i variable. This is common in the parsing world. As another side note, the map function is lazy in Python, so the second example has no effect until you iterate over mapped_it.
Your case
In your case, you are returning the value of the callback call. In fact, you don't need the callback; you can simply line up the calls as you did, so for this case you don't need HoFs.
I hope this helps, and that somebody can show better examples of applicative style :)
Regards
I know this may sound like a stupid question, especially to someone who knows Python's nature, but I was just wondering: is there a way to know if an object "implements an interface", so to speak?
To give an example of what I want to say:
let's say I have this function:
def get_counts(sequence):
    counts = {}
    for x in sequence:
        if x in counts:
            counts[x] += 1
        else:
            counts[x] = 1
    return counts
My question is: Is there a way to make sure that the object passed to the function is iterable? I know that in Java or C# I could do this by having the method accept any object that implements a specific interface, let's say (for example) iIterable like this: void get_counts(iIterable sequence)
My guess is that in Python I would have to employ preemptive introspection checks (in a decorator, perhaps?) and throw a custom exception if the object doesn't have an __iter__ attribute. But is there a more pythonic way to do this?
Use polymorphism and duck-typing before isinstance() or interfaces
You generally define what you want to do with your objects, then either use polymorphism to adjust how each object responds to what you want to do, or you use duck typing; test if the object at hand can do the thing you want to do in the first place. This is the invocation versus introspection trade-off, conventional wisdom states that invocation is preferable over introspection, but in Python, duck-typing is preferred over isinstance testing.
So you need to work out why you need to filter on whether or not something is iterable in the first place; why do you need to know this? Just test with try: iter(object), except TypeError: # not iterable.
Or perhaps you just need to throw an exception if whatever that was passed was not an iterable, as that would signal an error.
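Applied to the question's get_counts, that test could look like this sketch:

def get_counts(sequence):
    try:
        iterator = iter(sequence)  # raises TypeError if sequence is not iterable
    except TypeError:
        raise TypeError("get_counts() expects an iterable")
    counts = {}
    for x in iterator:
        counts[x] = counts.get(x, 0) + 1
    return counts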
ABCs
With duck-typing, you may find that you have to test for multiple methods, and thus an isinstance() test may look like a better option. In such cases, using an Abstract Base Class (ABC) could also be an option; using an ABC lets you 'paint' several different classes as being the right type for a given operation, for example, as sketched below. Using an ABC lets you focus on the tasks that need to be performed rather than the specific implementations used; you can have a Paintable ABC, a Printable ABC, etc.
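A minimal sketch of the Paintable idea; the canvas method names are invented for illustration:

import abc

class Paintable(abc.ABC):
    @abc.abstractmethod
    def paint(self, canvas):
        """Draw this object onto the given canvas."""

class Circle(Paintable):
    def __init__(self, radius):
        self.radius = radius
    def paint(self, canvas):
        canvas.draw_circle(self.radius)  # invented canvas API

# Pre-existing classes can also be registered as "virtual" subclasses,
# so isinstance(obj, Paintable) succeeds without inheritance:
# Paintable.register(LegacyShape)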
Zope interfaces and component architecture
If you find your application is using an awful lot of ABCs or you keep having to add polymorphic methods to your classes to deal with various different situations, the next step is to consider using a full-blown component architecture, such as the Zope Component Architecture (ZCA).
zope.interface interfaces are ABCs on steroids, especially when combined with the ZCA adapters. Interfaces document expected behaviour of a class:
if IFrobnarIterable.providedBy(yourobject):
    # it'll support iteration and yield Frobnars.
but it also lets you look up adapters; instead of putting all the behaviours for every use of shapes in your classes, you implement adapters to provide polymorphic behaviours for specific use-cases. You can adapt your objects to be printable, or iterable, or exportable to XML:
class FrobnarsXMLExport(object):
    adapts(IFrobnarIterable)
    provides(IXMLExport)

    def __init__(self, frobnariterator):
        self.frobnars = frobnariterator

    def export(self):
        entries = []
        for frobnar in self.frobnars:
            entries.append(
                u'<frobnar><width>{0}</width><height>{1}</height></frobnar>'.format(
                    frobnar.width, frobnar.height))
        return u''.join(entries)
and your code merely has to look up adapters for each shape:
for obj in setofobjects:
    self.result.append(IXMLExport(obj).export())
Python (since 2.6) has abstract base classes (aka virtual interfaces), which are more flexible than Java or C# interfaces. To check whether an object is iterable, use collections.Iterable:
import collections

if isinstance(obj, collections.Iterable):
    ...
However, if your else block would just raise an exception, then the most Pythonic answer is: don't check! It's up to your caller to pass in an appropriate type; you just need to document that you're expecting an iterable object.
The Pythonic way is to use duck typing and, "ask forgiveness, not permission". This usually means performing an operation in a try block assuming it behaves the way you expect it to, then handling other cases in an except block.
I think this is the way that the community would recommend you do it:
def do_something(foo):
    try:
        for i in foo:
            process(i)
    except TypeError as ex:
        # Fragile: matching on the exception's message text.
        if "is not iterable" in str(ex):
            print("Is not iterable")

do_something(True)
Or, you could use something like zope.interface.
class foo(object):
    def __init__(self, f):
        self.f = f
    def __call__(self, args_list):
        def wrapped_f(args_list):
            return [self.f(*args) for args in args_list]
        return wrapped_f(args_list)

if __name__ == '__main__':
    class abc(object):
        @foo
        def f(a, b, c):
            return a + b + c

    a = range(5)
    b = range(5)
    c = range(5)
    data = list(zip(a, b, c))
    print(abc.f(data))
I wrote this a few years back. When you decorate any function f(X) with @foo it becomes f(list of Xs).
What is this process called? What is it? What is its functional programming name?
It's not currying. I know a simple map(f, list of Xs) could have done it.
What are decorators/operation of decorating called mathematically?
There are two transformations performed on your original function:
it is converted from a function of three arguments to a function that takes a 3-tuple
it is converted from a function of a 3-tuple to a function that takes a list of 3-tuples
First transformation
In Haskell, there is a function called uncurry, documented here. (This is a two-argument version; 3-, 4-, ... versions could be easily created, too).
Second transformation
Also in Haskell, there are sets of functions with lift in their names. Here's a page on the Haskell wiki about lifting. I think that page explains it better than I could:
Lifting is a concept which allows you to transform a function into a
corresponding function within another (usually more general) setting.
So in your case, you're lifting a function from operating on tuples to operating on a list of tuples.
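In Python terms, a hedged sketch of the two transformations might look like this (uncurry and lift_to_list are illustrative names, not standard library functions):

def uncurry(f):
    # function of three arguments -> function of one 3-tuple
    def g(triple):
        return f(*triple)
    return g

def lift_to_list(g):
    # function of one value -> function of a list of values
    def h(values):
        return [g(v) for v in values]
    return h

def add3(a, b, c):
    return a + b + c

f = lift_to_list(uncurry(add3))
print(f([(1, 2, 3), (4, 5, 6)]))  # [6, 15]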
Notes:
the OP asked for the mathematical name for decorators. I don't know what that would be, but I've heard that Haskell is supposed to be like executable mathematics, so I think Haskell's terminology is a good starting point. YMMV.
the OP asked for the FP name of these processes. Again, I don't know, but I assume that Haskell's terminology is acceptable.
Decorators just have special syntax, but there are no rules about what decorators can return and no mathematical description. They can be any callable, after all.
Your function is just a partially applied starmap:
from functools import partial
from itertools import starmap
def foo(f):
    return partial(starmap, f)
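A quick usage sketch, mirroring the earlier example; note that starmap is lazy, so list() is needed to realize the results:

from functools import partial
from itertools import starmap

def foo(f):
    return partial(starmap, f)

g = foo(lambda a, b, c: a + b + c)
print(list(g([(0, 0, 0), (1, 1, 1), (2, 2, 2)])))  # [0, 3, 6]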
In a functional language like Haskell, you would do this by partially applying the map function to a function which takes a tuple of arguments, resulting in a function which takes a list of argument tuples. As Jochen Ritzel pointed out in another answer, even in Python you can implement this pretty trivially using functools.partial.
Therefore I suppose this process is called "partial application of map", or some such thing. I'm not aware of any particular name for this special case.
They are simply called Decorators. What it does can be called function chaining or function annotation, but I looked around quite a bit and found no special functional/mathematical name for this process besides those two (chaining/annotation).
PEP Index > PEP 318 -- Decorators for Functions and Methods
On the name 'Decorator'
There's been a number of complaints about the choice of the name
'decorator' for this feature. The major one is that the name is not
consistent with its use in the GoF book [11]. The name 'decorator'
probably owes more to its use in the compiler area -- a syntax tree is
walked and annotated. It's quite possible that a better name may turn
up.
Can you explain the concept of stubbing out functions or classes, taken from this article?
class Loaf:
    pass
This class doesn't define any methods or attributes, but syntactically, there needs to be something in the definition, so you use pass. This is a Python reserved word that just means “move along, nothing to see here”. It's a statement that does nothing, and it's a good placeholder when you're stubbing out functions or classes.
thank you
stubbing out functions or classes
This refers to writing classes or functions but not yet implementing them. For example, maybe I create a class:
class Foo(object):
    def bar(self):
        pass
    def tank(self):
        pass
I've stubbed out the functions because I haven't yet implemented them. However, I don't think this is a great plan. Instead, you should do:
class Foo(object):
    def bar(self):
        raise NotImplementedError
    def tank(self):
        raise NotImplementedError
That way if you accidentally call the method before it is implemented, you'll get an error rather than nothing happening.
A 'stub' is a placeholder class or function that doesn't do anything yet, but needs to be there so that the class or function in question is defined. The idea is that you can already use certain aspects of it (such as put it in a collection or pass it as a callback), even though you haven't written the implementation yet.
Stubbing is a useful technique in a number of scenarios, including:
Team development: Often, the lead programmer will provide class skeletons filled with method stubs and a comment describing what the method should do, leaving the actual implementation to other team members.
Iterative development: Stubbing allows for starting out with partial implementations; the code won't be complete yet, but it still compiles. Details are filled in over the course of later iterations.
Demonstrational purposes: If the content of a method or class isn't interesting for the purpose of the demonstration, it is often left out, leaving only stubs.
Note that you can stub functions like this:
def get_name(self) -> str : ...
def get_age(self) -> int : ...
(yes, this is valid Python code!)
It can be useful to stub functions that are added dynamically to an object by a third-party library when you want type hints.
Happened to me... once :-)
Ellipsis ... is preferable to pass for stubbing.
pass means "do nothing", whereas ... means "something should go here" - it's a placeholder for future code. The effect is the same but the meaning is different.
Stubbing is a technique in software development. After you have planned a module or class, for example by drawing its UML diagram, you begin implementing it.
As you may have to implement a lot of methods and classes, you begin with stubs. This simply means that you only write the definition of a function down and leave the actual code for later. The advantage is that you won't forget methods and you can continue to think about your design while seeing it in code.
The reason for pass is that Python is indentation dependent and expects one or more indented statement after a colon (such as after class or function).
When you have no statements (as in the case of a stubbed out function or class), there still needs to be at least one indented statement, so you can use the special pass statement as a placeholder. You could just as easily put something with no effect like:
class Loaf:
    True
and that is also fine (but less clear than using pass in my opinion).
For debugging, it is often useful to tell if a particular function is higher up on the call stack. For example, we often only want to run debugging code when a certain function called us.
One solution is to examine all of the stack entries higher up, but if this check is in a function that is deep in the stack and repeatedly called, it leads to excessive overhead. The question is to find a method that allows us to determine if a particular function is higher up on the call stack in a way that is reasonably efficient.
Similar
Obtaining references to function objects on the execution stack from the frame object? - This question focuses on obtaining the function objects, rather than determining if we are in a particular function. Although the same techniques could be applied, they may end up being extremely inefficient.
Unless the function you're aiming for does something very special to mark "one instance of me is active on the stack" (IOW: if the function is pristine and untouchable and can't possibly be made aware of this peculiar need of yours), there is no conceivable alternative to walking frame by frame up the stack until you hit either the top (and the function is not there) or a stack frame for your function of interest. As several comments to the question indicate, it's extremely doubtful whether it's worth striving to optimize this. But, assuming for the sake of argument that it was worthwhile...:
Edit: the original answer (by the OP) had many defects, but some have since been fixed, so I'm editing to reflect the current situation and why certain aspects are important.
First of all, it's crucial to use try/except, or with, in the decorator, so that ANY exit from a function being monitored is properly accounted for, not just normal ones (as the original version of the OP's own answer did).
Second, every decorator should ensure it keeps the decorated function's __name__ and __doc__ intact -- that's what functools.wraps is for (there are other ways, but wraps makes it simplest).
Third, just as crucial as the first point, a set, which was the data structure originally chosen by the OP, is the wrong choice: a function can be on the stack several times (direct or indirect recursion). We clearly need a "multi-set" (also known as "bag"), a set-like structure which keeps track of "how many times" each item is present. In Python, the natural implementation of a multiset is as a dict mapping keys to counts, which in turn is most handily implemented as a collections.defaultdict(int).
Fourth, a general approach should be threadsafe (when that can be accomplished easily, at least;-). Fortunately, threading.local makes it trivial, when applicable -- and here, it should surely be (each stack having its own separate thread of calls).
Fifth, an interesting issue that has been broached in some comments (noticing how badly the offered decorators in some answers play with other decorators): the monitoring decorator apparently has to be the LAST (outermost) one, otherwise the checking breaks. This comes from the natural but unfortunate choice of using the function object itself as the key into the monitoring dict.
I propose to solve this by a different choice of key: make the decorator take a (string, say) identifier argument that must be unique (in each given thread) and use the identifier as the key into the monitoring dict. The code checking the stack must of course be aware of the identifier and use it as well.
At decorating time, the decorator can check for the uniqueness property (by using a separate set). The identifier may be left to default to the function name (so it's only explicitly required to keep the flexibility of monitoring homonymous functions in the same namespace); the uniqueness property may be explicitly renounced when several monitored functions are to be considered "the same" for monitoring purposes (this may be the case if a given def statement is meant to be executed multiple times in slightly different contexts to make several function objects that the programmers wants to consider "the same function" for monitoring purposes). Finally, it should be possible to optionally revert to the "function object as identifier" for those rare cases in which further decoration is KNOWN to be impossible (since in those cases it may be the handiest way to guarantee uniqueness).
So, putting these many considerations together, we could have (including a threadlocal_var utility function that will probably already be in a toolbox module of course;-) something like the following...:
import collections
import functools
import threading

threadlocal = threading.local()

def threadlocal_var(varname, factory, *a, **k):
    v = getattr(threadlocal, varname, None)
    if v is None:
        v = factory(*a, **k)
        setattr(threadlocal, varname, v)
    return v

def monitoring(identifier=None, unique=True, use_function=False):
    def inner(f):
        assert (not use_function) or (identifier is None)
        # Bind the key to a local name: rebinding `identifier` inside
        # inner() would make it local and raise UnboundLocalError.
        if identifier is None:
            key = f if use_function else f.__name__
        else:
            key = identifier
        if unique:
            monitored = threadlocal_var('uniques', set)
            if key in monitored:
                raise ValueError('Duplicate monitoring identifier %r' % key)
            monitored.add(key)
        @functools.wraps(f)
        def wrapper(*a, **k):
            # Look the counts multiset up per-call so each thread gets its own.
            counts = threadlocal_var('counts', collections.defaultdict, int)
            counts[key] += 1
            try:
                return f(*a, **k)
            finally:
                counts[key] -= 1
        return wrapper
    return inner
I have not tested this code, so it might contain some typo or the like, but I'm offering it because I hope it does cover all the important technical points I explained above.
Is it all worth it? Probably not, as previously explained. However, I think along the lines of "if it's worth doing at all, then it's worth doing right";-).
I don't really like this approach, but here's a fixed-up version of what you were doing:
from collections import defaultdict
import threading
functions_on_stack = threading.local()
def record_function_on_stack(f):
def wrapped(*args, **kwargs):
if not getattr(functions_on_stack, "stacks", None):
functions_on_stack.stacks = defaultdict(int)
functions_on_stack.stacks[wrapped] += 1
try:
result = f(*args, **kwargs)
finally:
functions_on_stack.stacks[wrapped] -= 1
if functions_on_stack.stacks[wrapped] == 0:
del functions_on_stack.stacks[wrapped]
return result
wrapped.orig_func = f
return wrapped
def function_is_on_stack(f):
return f in functions_on_stack.stacks
def nested():
if function_is_on_stack(test):
print "nested"
#record_function_on_stack
def test():
nested()
test()
This handles recursion, threading and exceptions.
I don't like this approach for two reasons:
It doesn't work if the function is decorated further: this must be the final decorator.
If you're using this for debugging, it means you have to edit code in two places to use it; one to add the decorator, and one to use it. It's much more convenient to just examine the stack, so you only have to edit code in the code you're debugging.
A better approach would be to examine the stack directly (possibly as a native extension for speed), and if possible, find a way to cache the results for the lifetime of the stack frame. (I'm not sure if that's possible without modifying the Python core, though.)
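For reference, the direct stack examination mentioned here can be done in pure Python by walking frame objects; a minimal sketch (no caching, so each check still walks the stack):

import sys

def function_is_on_stack(func):
    frame = sys._getframe(1)  # skip this helper's own frame
    while frame is not None:
        if frame.f_code is func.__code__:
            return True
        frame = frame.f_back
    return False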