I'm spending today learning Ruby from a Python perspective. One thing I have completely failed to grapple with is an equivalent of decorators. To pare things down I'm trying to replicate a trivial Python decorator:
#! /usr/bin/env python
import math
def document(f):
def wrap(x):
print "I am going to square", x
f(x)
return wrap
@document
def square(x):
print math.pow(x, 2)
square(5)
Running this gives me:
I am going to square 5
25.0
So, I want to create a function square(x), but decorate it so it alerts me as to what it's going to square before it does it. Let's get rid of the sugar to make it more basic:
...
def square(x):
print math.pow(x, 2)
square = document(square)
...
So, how do I replicate this in Ruby? Here's my first attempt:
#! /usr/bin/env ruby
def document(f)
def wrap(x)
puts "I am going to square", x
f(x)
end
return wrap
end
def square(x)
puts x**2
end
square = document(square)
square(5)
Running this generates:
./ruby_decorate.rb:8:in `document': wrong number of arguments (0 for 1) (ArgumentError)
from ./ruby_decorate.rb:15:in `<main>'
Which I guess is because parentheses aren't mandatory and it's taking my return wrap as an attempt to return wrap(). I know of no way to refer to a function without calling it.
I've tried various other things, but nothing gets me far.
Here's another approach that eliminates the problem of conflicts between the names of aliased methods (NOTE: my other solution, using modules for decoration, is a good alternative too, as it also avoids conflicts):
module Documenter
def document(func_name)
old_method = instance_method(func_name)
define_method(func_name) do |*args|
puts "about to call #{func_name}(#{args.join(', ')})"
old_method.bind(self).call(*args)
end
end
end
The above code works because the old_method local variable is kept alive in the new method by virtue of the define_method block being a closure.
Ok, time for my attempt at an answer. I'm aiming here specifically at Pythoneers trying to reorganize their brains. Here's some heavily documented code that (approximately) does what I was originally trying to do:
Decorating instance methods
#! /usr/bin/env ruby
# First, understand that decoration is not 'built in'. You have to make
# your class aware of the concept of decoration. Let's make a module for this.
module Documenter
def document(func_name) # This is the function that will DO the decoration: given a function, it'll extend it to have 'documentation' functionality.
new_name_for_old_function = "#{func_name}_old".to_sym # We extend the old function by 'replacing' it - but to do that, we need to preserve the old one so we can still call it from the snazzy new function.
alias_method(new_name_for_old_function, func_name) # This function, alias_method(), does what it says on the tin - allows us to call either function name to do the same thing. So now we have TWO references to the OLD crappy function. Note that alias_method is NOT a built-in function, but is a method of Class - that's one reason we're doing this from a module.
define_method(func_name) do |*args| # Here we're writing a new method with the name func_name. Yes, that means we're REPLACING the old method.
puts "about to call #{func_name}(#{args.join(', ')})" # ... do whatever extended functionality you want here ...
send(new_name_for_old_function, *args) # This is the same as `self.send`. `self` here is an instance of your extended class. As we had TWO references to the original method, we still have one left over, so we can call it here.
end
end
end
class Squarer # Drop any idea of doing things outside of classes. Your method to decorate has to be in a class/instance rather than floating globally, because the afore-used functions alias_method and define_method are not global.
extend Documenter # We have to give our class the ability to document its functions. Note we EXTEND, not INCLUDE - this gives Squarer, which is an INSTANCE of Class, the class method document() - we would use `include` if we wanted to give INSTANCES of Squarer the method `document`. <http://blog.jayfields.com/2006/05/ruby-extend-and-include.html>
def square(x) # Define our crappy undocumented function.
puts x**2
end
document(:square) # this is the same as `self.document`. `self` here is the CLASS. Because we EXTENDED it, we have access to `document` from the class rather than an instance. `square()` is now jazzed up for every instance of Squarer.
def cube(x) # Yes, the Squarer class has got a bit too big for its boots
puts x**3
end
document(:cube)
end
# Now you can play with squarers all day long, blissfully unaware of their ability to `document` themselves.
squarer = Squarer.new
squarer.square(5)
squarer.cube(5)
Still confused? I wouldn't be surprised; this has taken me almost a whole DAY. Some other things you should know:
The first thing, which is CRUCIAL, is to read this: http://www.softiesonrails.com/2007/8/15/ruby-101-methods-and-messages. When you call 'foo' in Ruby, what you're actually doing is sending a message to its owner: "please call your method 'foo'". You just can't get a direct hold on functions in Ruby in the way you can in Python; they're slippery and elusive. You can only see them as though shadows on a cave wall; you can only reference them through strings/symbols that happen to be their name. Try to think of every method call 'object.foo(args)' you do in Ruby as the equivalent of this in Python: 'object.__getattribute__('foo')(args)'. (There's a short Python sketch of this just after these notes.)
Stop writing any function/method definitions outside of modules/classes.
Accept from the get-go that this learning experience is going to be brain-melting, and take your time. If Ruby isn't making sense, punch a wall, go make a cup of coffee, or take a night's sleep.
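To make that first point concrete, here's a tiny Python sketch of the "look the method up by name, then call it" idea:
class Greeter(object):
    def hello(self, name):
        return "hello, %s" % name

g = Greeter()
print(g.hello("world"))              # the usual call
print(getattr(g, "hello")("world"))  # the same call, looked up by name - roughly what Ruby's send does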
Decorating class methods
The above code decorates instance methods. What if you want to decorate methods directly on the class? If you read http://www.rubyfleebie.com/understanding-class-methods-in-ruby, you find there are three methods for creating class methods -- but only one of them works for us here.
That is the anonymous class << self technique. Let's do the same as above, but so that we can call square() and cube() without instantiating a Squarer:
class Squarer
class << self # class methods go in here
extend Documenter
def square(x)
puts x**2
end
document(:square)
def cube(x)
puts x**3
end
document(:cube)
end
end
Squarer.square(5)
Squarer.cube(5)
Have fun!
Python-like decorators can be implemented in Ruby. I won't try to explain this or give examples, because Yehuda Katz has already published a good blog post about a decorator DSL in Ruby, so I highly recommend reading it:
Python Decorators in Ruby
Source code and tests
UPDATE: I've got a couple of downvotes on this one, so let me explain further.
alias_method (and alias_method_chain) is NOT exactly the same concept as a decorator. It is just a way to redefine a method's implementation without using inheritance (so client code won't notice a difference and can keep using the same method call). It can be useful, but it can also be error-prone. Anyone who has used the Gettext library for Ruby has probably noticed that its ActiveRecord integration broke with each major Rails upgrade, because the aliased version kept following the semantics of the old method.
The purpose of a decorator in general is NOT to change the internals of a given method while still being able to call the original one from the modified version, but to enhance the function's behavior. The "entry/exit" use case, which is somewhat close to alias_method_chain, is only a simple demonstration. Another, more useful kind of decorator could be @login_required, which checks authorization and only runs the function if authorization was successful, or @trace(arg1, arg2, arg3), which could perform a set of tracing procedures (and be called with different arguments when decorating different methods).
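For example, a rough Python sketch of a @trace-style decorator (the names are invented purely for illustration):
from functools import wraps

def trace(label):
    def decorator(f):
        @wraps(f)
        def wrapper(*args, **kwargs):
            # enhance the behavior: log the call, then run the original function
            print("%s: calling %s%r" % (label, f.__name__, args))
            return f(*args, **kwargs)
        return wrapper
    return decorator

@trace("AUDIT")
def transfer(amount):
    return amount

transfer(100)   # prints "AUDIT: calling transfer(100,)"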
What you might achieve with decorators in Python, you achieve with blocks in Ruby. (I cannot believe how many answers are on this page, without a single yield statement!)
def wrap(x)
puts "I am going to square #{x}"
yield x
end
def square(x)
x**2
end
>> wrap(2) { |x| square(x) }
I am going to square 2
=> 4
The concept is similar. With the decorator in Python, you're essentially passing the function "square" to be called from within "wrap". With the block in Ruby, I'm passing not the function itself, but a block of code inside of which the function is invoked, and that block of code is executed within the context of "wrap", where the yield statement is.
Unlike with decorators, the Ruby block being passed doesn't need a function to be part of it. The above could have been simply:
def wrap(x)
puts "I am going to square #{x}"
yield x
end
>> wrap(4) { |x| x**2 }
I am going to square 4
=> 16
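For comparison, the closest explicit Python spelling of the same idea is to pass the behaviour in as a plain function rather than using decorator syntax (just a sketch):
def wrap(x, f):
    print("I am going to square %s" % x)
    return f(x)

def square(x):
    return x**2

print(wrap(4, square))   # prints the message, then 16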
This is a slightly unusual question, but interesting. I'd first strongly recommend that you don't try to transfer your Python knowledge directly to Ruby - it's better to learn Ruby's own idioms and apply them, rather than trying to map Python constructs across. I've used both languages a lot, and they're both at their best when following their own rules and conventions.
Having said all that, here's some nifty code that you can use.
def with_document func_name, *args
puts "about to call #{func_name}(#{args.to_s[1...-1]})"
method(func_name).call *args
end
def square x
puts x**2
end
def multiply a, b
puts a*b
end
with_document :square, 5
with_document :multiply, 5, 3
this produces
about to call square(5)
25
about to call multiply(5, 3)
15
which I'm sure you'll agree does the job.
IMO mooware has the best answer so far and it is the cleanest, simplest and most idiomatic. However he is making use of 'alias_method_chain' which is part of Rails, and not pure Ruby. Here is a rewrite using pure Ruby:
class Foo
def square(x)
puts x**2
end
alias_method :orig_square, :square
def square(x)
puts "I am going to square #{x}"
orig_square(x)
end
end
You can also accomplish the same thing using modules instead:
module Decorator
def square(x)
puts "I am going to square #{x}"
super
end
end
class Foo
def square(x)
puts x**2
end
end
# let's create an instance
foo = Foo.new
# let's decorate the 'square' method on the instance
foo.extend Decorator
# let's invoke the new decorated method
foo.square(5) #=> "I am going to square 5"
#=> 25
Michael Fairley demonstrated this at RailsConf 2012. Code is available here on Github. Simple usage examples:
class Math
extend MethodDecorators
+Memoized
def fib(n)
if n <= 1
1
else
      fib(n - 1) + fib(n - 2)
end
end
end
# or using an instance of a Decorator to pass options
class ExternalService
extend MethodDecorators
+Retry.new(3)
def request
...
end
end
Your guess is right.
You'd best use alias to bind the original method to another name, and then define the new one to print something and call the old one. If you need to do this repeatedly, you can make a method that does this for any method (I had an example once, but cannot find it now).
PS: your code does not define a function within a function; it defines another function on the same object (yes, this is an undocumented feature of Ruby):
class A
def m
def n
end
end
end
defines both m and n on A.
NB: the way to refer to a function would be
A.method(:m)
Okay, I found my code again that does decorators in Ruby. It uses alias to bind the original method to another name, and then defines a new one that prints something and calls the old one. All this is done using eval, so that it can be reused like decorators in Python.
module Document
def document(symbol)
self.send :class_eval, """
alias :#{symbol}_old :#{symbol}
def #{symbol} *args
puts 'going to #{symbol} '+args.join(', ')
#{symbol}_old *args
end"""
end
end
class A
extend Document
def square(n)
puts n * n
end
def multiply(a,b)
puts a * b
end
document :square
document :multiply
end
a = A.new
a.square 5
a.multiply 3,4
Edit: here's the same with a block (no string manipulation pain):
module Document
def document(symbol)
self.class_eval do
symbol_old = "#{symbol}_old".to_sym
alias_method symbol_old, symbol
define_method symbol do |*args|
puts "going to #{symbol} "+args.join(', ')
self.send symbol_old, *args
end
end
end
end
I believe the corresponding Ruby idiom would be the alias method chain, which is heavily used by Rails. This article also considers it as the Ruby-style decorator.
For your example it should look like this:
class Foo
def square(x)
puts x**2
end
def square_with_wrap(x)
puts "I am going to square", x
square_without_wrap(x)
end
alias_method_chain :square, :wrap
end
The alias_method_chain call renames square to square_without_wrap and makes square an alias for square_with_wrap.
Note that alias_method_chain isn't built into Ruby itself (in 1.8 or 1.9); it comes from ActiveSupport, so outside of Rails you would have to copy it or define it yourself.
My Ruby-Skills have gotten a bit rusty, so I'm sorry if the code doesn't actually work, but I'm sure it demonstrates the concept.
In Ruby you can mimic Python's syntax for decorators like this:
def document
decorate_next_def {|name, to_decorate|
print "I am going to square", x
to_decorate
}
end
document
def square(x)
  puts x**2
end
Though you need some lib for that. I've written up how to implement such functionality (from back when I was trying to find what Python has that Ruby is missing).
Related
To implement prettified XML, I have written the following code:
def prettify_by_response(response, prettify_func):
root = ET.fromstring(response.content)
return prettify_func(root)
def prettify_by_str(xml_str, prettify_func):
root = ET.fromstring(xml_str)
return prettify_func(root)
def make_pretty_xml(root):
rough_string = ET.tostring(root, "utf-8")
reparsed = minidom.parseString(rough_string)
xml = reparsed.toprettyxml(indent="\t")
return xml
def prettify(response):
if isinstance(response, str) or isinstance(response, bytes):
return prettify_by_str(response, make_pretty_xml)
else:
return prettify_by_response(response, make_pretty_xml)
In the prettify_by_response and prettify_by_str functions, I pass the function make_pretty_xml as an argument.
Instead of passing the function as an argument, I could simply call that function directly, e.g.:
def prettify_by_str(xml_str, prettify_func):
root = ET.fromstring(xml_str)
return make_pretty_xml(root)
One of the advantages of passing the function as an argument to these functions, rather than calling it directly, is that they are not tightly coupled to the make_pretty_xml function.
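For instance, here's a small sketch of what that decoupling buys, reusing prettify_by_str and make_pretty_xml from above (minify_xml is a hypothetical alternative formatter, only for illustration):
import xml.etree.ElementTree as ET

def minify_xml(root):
    # hypothetical alternative formatter: compact output, no pretty-printing
    return ET.tostring(root, "utf-8")

xml_str = "<root><item>1</item></root>"
print(prettify_by_str(xml_str, make_pretty_xml))   # indented output
print(prettify_by_str(xml_str, minify_xml))        # compact output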
What would the other advantages be, or am I just adding additional complexity?
This seems very open to biased answers. I'll try to be impartial, but I can't make any promises.
First, higher-order functions are functions that receive and/or return functions. The advantages are debatable, so I'll try to enumerate the common uses of HoFs and elucidate the good and the bad of each one.
Callbacks
Callbacks came about as a solution to blocking calls. I need B to happen after A, so I call something that blocks on A and then calls B. This naturally leads to thoughts like: hmm, my system wastes a lot of time waiting for things to happen; what if, instead of waiting, I pass in what I need done as an argument? As with anything new in technology, it seems like a good idea right up until it has to scale.
Callbacks are very common in event systems. If you have ever written JavaScript, you know what I'm talking about.
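A toy sketch of the shape of the pattern (everything is synchronous here, purely to show the idea):
def show(data):
    print(data)

def fetch(url, on_done):
    # stand-in for an operation that finishes later; when it does, hand the result to the callback
    data = "<response for %s>" % url
    on_done(data)

fetch("http://example.com", show)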
Algorithm abstraction
Some designs, mostly the behavioral ones, can make use of HoFs to choose an algorithm at runtime. You can have a high-level algorithm that receives functions that deal with the low-level details. This leads to more abstraction, code reuse, and portable code. Here, portable means you can write code to handle new low-level cases without changing the high-level ones. This isn't specific to HoFs, but they can be a great help here.
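A tiny sketch of the idea: the high-level routine stays fixed while the low-level scoring function is swapped in:
def best_item(items, score):
    # high-level algorithm: it only knows it needs "some way to score an item"
    return max(items, key=score)

print(best_item([3, -7, 2], abs))        # -7 (largest absolute value)
print(best_item(["pear", "fig"], len))   # 'pear' (longest string)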
Attaching behavior to another function
The idea here is to take a function as an argument and return a function that does exactly what the argument function does, plus some attached behavior. And this is where (I think) HoFs really shine.
Python decorators are a perfect example. They take a function as an argument and return another function, which is then bound to the same identifier as the original function:
@foo
def bar(*args):
...
is the same as
def bar(*args):
...
bar = foo(bar)
Now, reflect on this code
from functools import lru_cache
@lru_cache(maxsize=None)
def fib(n):
if n < 2:
return n
return fib(n-1) + fib(n-2)
fib is just a Fibonacci function; it calculates the nth Fibonacci number. Now lru_cache attaches a new behavior: caching results for previously calculated values. The logic inside fib is not tainted by the LRU cache logic. What a beautiful piece of abstraction we have here.
Applicative style programming or point-free programming
The idea here is to remove variables, or "points", and express algorithms by combining function applications. I'm sure there are lots of people on SO better versed in this subject than I am.
As a side note, this is not a very common style in Python.
for i in it:
    func(i)

mapped_it = map(func, it)
In the second example, we removed the i variable. This is common in the parsing world. As another side note, map is lazy in Python 3, so the second example doesn't actually do anything until you iterate over mapped_it.
Your case
In your case, you are just returning the value of the callback call. In fact, you don't need the callback: you can simply line up the calls as you did, and for this case you don't need a HoF.
I hope this helps, and that somebody can show better examples of applicative style :)
Regards
I have a function with way too much going on in it, so I've decided to split it up into smaller functions and call all of those smaller functions from a single function, e.g.:
def main_function(self):
time_subtraction(self)
pay_calculation(self,todays_hours)
and:
def time_subtraction(self):
todays_hours = datetime.combine(datetime(1,1,1,0,0,0), single_object2) - datetime.combine(datetime(1,1,1,0,0,0),single_object)
return todays_hours
So what I'm trying to accomplish here is to make todays_hours available to my main_function. I've read lots of documentation and other resources, but apparently I'm still struggling with this.
EDIT: these are not methods of a class. It's just a file where I have a lot of functions coded, and I import it where needed.
If you want to pass the return value of one function to another, you need to either nest the function calls:
pay_calculation(self, time_subtraction(self))
… or store the value so you can pass it:
hours = time_subtraction(self)
pay_calculation(self, hours)
As a side note, if these are methods in a class, you should be calling them as self.time_subtraction(), self.pay_calculation(hours), etc., not time_subtraction(self), etc. And if they aren't methods in a class, maybe they should be.
Often it makes sense for a function to take a Spam instance, and for a method of Spam to send self as the first argument, in which case this is all fine. But the fact that you've defined def time_subtraction(self): implies that's not what's going on here, and you're confused about methods vs. normal functions.
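For illustration, a minimal sketch of both spellings (the Spam class and the numbers are made up):
class Spam(object):
    def time_subtraction(self):
        return 8   # stand-in value

    def main_function(self):
        hours = self.time_subtraction()      # method call: self is passed implicitly
        return pay_calculation(self, hours)  # plain function: the instance is passed explicitly

def pay_calculation(spam, hours):
    # a plain function that takes a Spam instance
    return hours * 15

s = Spam()
print(s.main_function())   # 120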
Say I have something like this sample code.
def foo(n):
def bar():
return -1
    if n == 0:
return 0
else:
return foo(n+bar())
I assume it creates a new instance of bar every time foo is recursively called. However, this seems like something that could be optimized in Python (or any other language), but I was unable to find anything stating whether it has been.
The reason I have bar defined inside foo is that I'm trying to hide bar from the user, and Python's _bar() or __bar() convention of "please sir, don't use this, dear user" annoys me, as I was trained in non-scripting languages.
def is an executable statement in Python (and so is class). A new function object for bar is created each time foo() is invoked, but it's pretty trivial cost. It just retrieves the compiled code object, and wraps it in a new function object. People stress over this waaaay too much ;-) In general, it's necessary to do this in order for closures to work right, and to capture appropriate default arguments. You're right that in many cases this could be optimized a bit, but the CPython implementation doesn't bother.
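A small sketch showing both halves of that claim (a cheap new function object per call, wrapping a shared code object):
def foo(n):
    def bar():
        return -1
    return bar

f1, f2 = foo(1), foo(2)
print(f1 is f2)                    # False: a new function object is created on each call to foo
print(f1.__code__ is f2.__code__)  # True: the compiled code object is reused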
I am learning Python and am trying to figure out the best way to structure my code.
Let's say I have a long function and want to break it up into smaller functions. In C, I would make it a 'static' function at the top level (since that is the only level of functions). I would also probably forward declare it and place it after the now-shortened function that uses it.
Now for Python. In Python, I have the option to create a nested function. Since this new "inner" function is really only a piece of the larger function broken off for readability purposes, and only used by it, it sounds like it should be a nested function. But having this function inside the parent function means the whole thing is still very long, since no code was actually moved out of it! And since functions have to be fully defined before they are called, the actual short body ends up all the way down at the end of this pseudo-long function, which makes readability terrible!
What is considered good practice for situations like this?
How about placing the smaller functions in their own file and importing them in your main function? You'd have something like:
def main_func():
from impl import a, b, c
a()
b()
c()
I think this approach leads to high readability: You see where the smaller functions come from in case you want to look into them, importing them is a one-liner, and the implementation of the main function is directly visible. By choosing an appropriate file name / location, you can also tell the user that these functions are not intended for use outside of main_func (you don't have real information hiding in Python anyway).
By the way: This question doesn't have one correct answer.
As far as I know, the main advantage of inner functions in Python is that they inherit the scope of the enclosing function. So if you need access to variables in the main function's scope (eg. argument or local variable), an inner function is the way to go. Otherwise, do whatever you like and/or find most readable.
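For example, a tiny sketch of an inner function reading the enclosing function's scope:
def outer(x):
    def inner(y):
        # x comes from the enclosing call to outer()
        return x + y
    return inner

add_five = outer(5)
print(add_five(3))   # 8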
EDIT:
See this answer too.
So what I could understand is that you have a long function like:
def long_func(blah, foo, *args):
...
...
my_val = long_func(foo, blah, a, b, c)
What you have done is:
def long_func(blah, foo, *args):
def short_func1():
...
def short_func2():
...
...
short_func1()
short_func2()
...
...
my_val = long_func(foo, blah, a, b, c)
You have lots more options, I'll list two:
Make it into a class
class SomeName(object):
def __init__(self, blah, foo, *args):
self.blah = blah
self.foo = foo
self.args = args
self.result = None # Might keep this for returning values or see (2)
def short_func1(self):
...
def short_func2(self):
...
def run(self): # name it as you like!
self.short_func1()
self.short_func2()
return self.result # (2) or return the last call, on you
...
my_val = SomeName(foo, blah, a, b, c).run()
Make another module and put the short_funcs into it. Just like flyx has suggested.
def long_func(foo, blah, *args):
from my_module import short_func1, short_func2
short_func1(foo)
short_func2(blah)
The good practice is to keep cyclomatic complexity low. This practically means breaking your long function into many smaller functions.
The complexity is measured by the number of if, while, do, for, ?:,
catch, switch, case statements, and operators && and || (plus one) in
the body of a constructor, method, static initializer, or instance
initializer. It is a measure of the minimum number of possible paths
through the source and therefore the number of required tests.
Generally 1-4 is considered good, 5-7 ok, 8-10 consider re-factoring,
and 11+ re-factor now !
I suggest taking this advice, which comes from Sonar, a code quality analysis tool. A good way to refactor such code is with TDD: first write unit tests to cover all the execution paths of your current function. After that you can refactor with the peace of mind that the unit tests will guarantee you didn't break anything.
If on the other hand your long function is just long, but otherwise already has a low cyclomatic complexity, then I think it doesn't matter much whether the function is nested or not.
Possible Duplicate:
Python dynamic function creation with custom names
I have written a little script to determine whether what I wanted to do is possible. It is.
My goal is to dynamically (at runtime) create functions (or methods) which have names based on a list of arbitrary size (size of the list = number of functions dynamically created).
All the functions do the same (for now), they just print their arguments.
The following code does exactly what I want, BUT it is not clean and it is very brute-force. I'm trying to figure out if there is a better way to do this.
class Binder:
def __init__(self, test_cases):
""""
test_cases: a list of function/method names.
length of test_case = number of methods created.
"""
for test_case in test_cases:
#construct a code string for creating a new method using "exec"
func_str = "def "
func_str += test_case
func_str += "(*args):"
func_str += "\n\t"
func_str += "for arg in args:"
func_str += "\n\t\t"
func_str += "print arg"
func_str += "\n"
"""
For example, func_str for test_cases[0]= "func1" is simply:
def func1(*args):
for arg in args:
print arg
"""
#use exec to define the function
exec(func_str)
#add the function as a method to this class
# for test_cases[0] = "func1", this is: self.func1 = func1
set_self = "self." + test_case + " = " + test_case
exec(set_self)
if __name__ == '__main__':
#this list holds the names of the new functions to be created
test_cases = ["func1", "func2", "func3", "func4"]
b = Binder(test_cases)
    # simply call each function as the instance's attributes
b.func1(1)
b.func2(1, 3, 5)
b.func4(10)
Output is:
1
1
3
5
10
As expected.
UPDATE: the content of the function would not simply be a for loop printing the args; it would do something more meaningful. I get the exact result I want from the piece of code above, I'm just wondering if there is a better way of doing it.
UPDATE: I'm tying together two ends of a much bigger module. One end determines what the test cases are and, among other things, populates a list of the test cases' names. The other end is the functions themselves, which must have a 1:1 mapping with the names of the test cases. So I have the names of the test cases and I know what I want to do with each one; I just need to create the functions that bear those names. Since the names of the test cases are determined at runtime, the function creation based on them must happen at runtime as well, and the number of test cases is also only known at runtime.
Is there a better way to do this??
Any and all suggestions welcome.
Thanks in advance.
Mahdi
In Python this (building the source as a string and exec-ing it) is about the most reasonable approach for generic metaprogramming.
If you just need some constants baked into the code, however, a closure may do the trick... for example:
def multiplier(k):
"Returns a function that multiplies the argument by k"
def f(x):
return x*k
return f
For arbitrary code generation, Python has an ast module, and you can in theory both inspect and create functions that way. However, in Python code is awkward to represent and manipulate in that form, so normally the approach is to just do everything at runtime instead of compiling specialized functions. When there really is an advantage to be had, you can use eval (to get a lambda) or exec. The ast module is used mostly just for inspection.
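For instance, here's a hedged sketch of the exec route with an explicit namespace dictionary - roughly a tidier version of what the question's Binder does (make_printer is a made-up name):
def make_printer(name):
    ns = {}
    src = (
        "def %s(*args):\n"
        "    for arg in args:\n"
        "        print(arg)\n"
    ) % name
    exec(src, ns)      # run the generated source in its own namespace
    return ns[name]

func1 = make_printer("func1")
func1(1, 2, 3)   # prints 1, 2, 3 on separate lines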
Writing code that generates code is what is called metaprogramming and Python is not very friendly to this approach.
You may have heard that C++ claims support for metaprogramming, but indeed it's only template-based metaprogramming where you can substitute types or constants into a fixed structure. You can play some tricks using "advanced metaprogramming" like the SFINAE rule but they're nothing more than trickery.
Python doesn't need templates because the language is not statically typed and you have closures (C++ is statically typed and there are no closures), but Python doesn't help with a general metaprogramming approach (i.e. writing general code that generates or manipulates code).
If you're interested in metaprogramming then the language of choice is probably Common Lisp. Writing code that writes code is not something special for Lisp and while it's of course more difficult to write a macro (a function that writes code) than a regular run-time function still with Lisp the difficulties are mostly essential (the problem is indeed harder) and not artificial because of language limitations.
There's an old joke among lispers that goes more or less like "I'm writing code that writes code that writes code that writes code that people pay me for". Metaprogramming is indeed just the first step; after that you have meta-meta-programming (writing code that writes code generators) and so on. Of course things get harder and harder (but once again because the problem is harder, not because of arbitrarily imposed limitations...).
First of all, this is probably a really bad idea. To see why, read the comments and the other answers.
But to answer your question, this should work (although it's a hack and might have strange side effects):
from types import MethodType
def myfunc(self, *args):
for arg in args:
print arg
class Binder(object):
def __init__(self, test_cases):
for test_case in test_cases:
method = MethodType(myfunc, self, Binder)
setattr(self, test_case, method)
In this case, the reference to the function is hardcoded, but you can of course pass it as an argument too.
This is the output:
>>> b = Binder(['a', 'b'])
>>> b.a(1, 2, 3)
1
2
3
>>> b.b('a', 20, None)
a
20
None