Define function as list element - Python

>>> list = [None]
>>> def list[0](x, y):
  File "<stdin>", line 1
    def list[0](x, y):
            ^
SyntaxError: invalid syntax
How can I define a function as an element of a list?

Python's def isn't flexible enough to handle generic lvalues such as list[0]. The language only allows an identifier as the function name. Here are the relevant parts of the grammar rule for the def statement:
funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite
funcname ::= identifier
Instead, you can use a series of assignment and definition statements:
s = [None]

def f(x, y):
    return x + y

s[0] = f
As an alternative, you could also store a lambda expression directly in a list:
s = [lambda x, y: x + y]
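Either way, you call the stored function by indexing into the list first; a quick check (purely illustrative):

print(s[0](2, 3))  # prints 5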

def f(whatever):
    do_stuff()

l[0] = f
The function definition syntax doesn't allow you to define a function directly into a data structure, but you can just create the function and then assign it wherever it needs to go.

def someFunctionA(x, y):
    return x + y

def someFunctionB(x, y):
    return x * y

someList = [someFunctionA, someFunctionB]
print(someList[0](2, 3))
print(someList[1](5, 5))

Allowing such freedom would make parsing harder... for example, the parenthesized expression
...(x, y, z=3)
can be either a parameter declaration (where 3 is the default for the keyword parameter z) or a call (that is, passing 3 as the value of the keyword argument z).
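Spelled out, the same token sequence would need two different parses depending on context (an illustrative sketch, not from the original answer):

def f(x, y, z=3):   # parameter list: z defaults to 3
    ...

f(1, 2, z=3)        # call: 3 passed as the keyword argument z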
If you want to allow a generic assignable expression in def you also need to allow
def foo(x, y, z=3)[3](x, y, z=3):
    ...
where the first parenthesized part has a different semantic meaning and syntax rules from the second part.
Writing a parser for this is annoying (basically because you need to process an arbitrary, unbounded amount of source code before understanding what it means), and that kind of difficulty is what led, for example, to the worst parsing rule in the whole universe I know of (the dreaded "most vexing parse" of C++), which basically just gave up on trying to get a decent language by resigning itself to ambiguity.
Note that in many cases, when it's harder for a program to do the parsing, it's because of ambiguity that would also make the code harder for a human to understand.
Python correctly values readability as very important.
Functions in Python are, however, first-class objects, so you can solve your problem easily enough:
def foo(...):
    ...

mylist[index] = foo
or, only if the function is a single expression, with
mylist[index] = lambda ... : ...
(but lambda is very limited, both because it's sort of "hated" in the Python community and because allowing full statements in it would create some annoyance at the syntax level, given the need to handle indentation inside parentheses).
Note also something that a few Python novices don't know: you can use def even inside a function. For example:
def register_http():
    def handle_http(connection):
        ...
    global_register['http'] = handle_http
that will assign a function as an element of a global map without polluting the global (module) namespace with its name. A local def can also create a closure by capturing local state variables (read-only in 2.x, or even read/write in 3.x).
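As a minimal sketch of that closure behavior (the names here are illustrative; nonlocal is what enables the read/write capture in 3.x):

def make_counter():
    count = 0
    def increment():
        nonlocal count  # 3.x: rebind the captured variable
        count += 1
        return count
    return increment

counter = make_counter()
counter()  # 1
counter()  # 2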
Note also that if you need to do some processing of a function, decorators can be useful. For example, by defining
def register(name):
    def do_registering(f):
        global_register[name] = f
        return f
    return do_registering
you can just use
@register('http')
def handle_http(connection):
    ...
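Assuming global_register is a module-level dict (e.g. global_register = {}), the handler can then be fetched back through the registry; a brief illustrative usage:

handler = global_register['http']
handler(connection)  # invokes handle_http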

Related

Why was Python decorator chaining designed to work backwards? What is the logic behind this order?

To start with, my question here is about the semantics and the logic behind why the Python language was designed like this in the case of chained decorators. Please notice the nuance of how this is different from the question "How does decorator chaining work?". It seems quite a number of other users had the same doubts about the call order of chained Python decorators. It is not like I can't add a __call__ and see the order for myself. I get this; my point is, why was it designed to start from the bottom when it comes to chained Python decorators?
E.g.
def first_func(func):
    def inner():
        x = func()
        return x * x
    return inner

def second_func(func):
    def inner():
        x = func()
        return 2 * x
    return inner

@first_func
@second_func
def num():
    return 10

print(num())
Quoting the documentation on decorators:
The decorator syntax is merely syntactic sugar, the following two function definitions are semantically equivalent:
def f(arg):
    ...
f = staticmethod(f)

@staticmethod
def f(arg):
    ...
From this it follows that the decoration in
@a
@b
@c
def fun():
    ...
is equivalent to
fun = a(b(c(fun)))
IOW, it was designed like that because it's just syntactic sugar.
For proof, let's just decorate an existing function and not return a new one:
def dec1(f):
    print(f"dec1: got {vars(f)}")
    f.dec1 = True
    return f

def dec2(f):
    print(f"dec2: got {vars(f)}")
    f.dec2 = True
    return f

@dec1
@dec2
def foo():
    pass

print(f"Fully decked out: {vars(foo)}")
prints out
dec2: got {}
dec1: got {'dec2': True}
Fully decked out: {'dec2': True, 'dec1': True}
TL;DR
g(f(x)) means applying f to x first, then applying g to the output.
Omit the parentheses, add @ before and a line break after each function name:
@g
@f
x
(Syntax only valid if x is the definition of a function/class.)
Abstract explanation
The reasoning behind this design decision becomes fairly obvious IMHO, if you remember what the decorator syntax - in its most abstract and general form - actually means. So I am going to try the abstract approach to explain this.
It is all about syntax
To be clear here, the distinguishing factor in the concept of the "decorator" is not the object underneath it (so to speak) nor the operation it performs. It is the special syntax and the restrictions on it. Thus, a decorator at its core is nothing more than a feature of Python grammar.
The decorator syntax requires a target to be decorated. Initially (see PEP 318) the target could only be function definitions; later class definitions were also allowed to be decorated (see PEP 3129).
Minimal valid syntax
Syntactically, this is valid Python:
def f(): pass

@f
class Target: pass  # or `def target(): pass`
However, this will (perhaps unsurprisingly) cause a TypeError upon execution. As has been reiterated multiple times here and in other posts on this platform, the above is equivalent to this:
def f(): pass

class Target: pass
Target = f(Target)
Minimal working decorator
The TypeError stems from the fact that f lacks a positional argument. This is the obvious logical restriction imposed by what a decorator is supposed to do. Thus, to achieve not only syntactically valid code, but also have it run without errors, this is sufficient:
def f(x): pass

@f
class Target: pass
This is still not very useful, but it is enough for the most general form of a working decorator.
Decoration is just application of a function to the target and assigning the output to the target's name.
Chaining functions ⇒ Chaining decorators
We can ignore the target and what it is or does and focus only on the decorator. Since it merely stands for applying a function, the order of operations comes into play, as soon as we have more than one. What is the order of operation, when we chain functions?
def f(x): pass
def g(x): pass

class Target: pass
Target = g(f(Target))
Well, just like in the composition of purely mathematical functions, this implies that we apply f to Target first and then apply g to the result of f. Despite g appearing first (i.e. further left), it is not what is applied first.
Since stacking decorators is equivalent to nesting functions, it seems obvious to define the order of operation the same way. This time, we just skip the parentheses, add an @ symbol in front of the function name and a line break after it.
def f(x): pass
def g(x): pass

@g
@f
class Target: pass
But, why though?
If after the explanation above (and reading the PEPs for historic background), the reasoning behind the order of operation is still not clear or still unintuitive, there is not really any good answer left, other than "because the devs thought it made sense, so get used to it".
PS
I thought I'd add a few things for additional context based on all the comments around your question.
Decoration vs. calling a decorated function
A source of confusion seems to be the distinction between what happens when applying the decorator versus calling the decorated function.
Notice that in my examples above I never actually called target itself (the class or function being decorated). Decoration is itself a function call. Adding @f above the target is calling f and passing the target to it as the first positional argument.
A "decorated function" might not even be a function
The distinction is very important because nowhere does it say that a decorator actually needs to return a callable (function or class). f being just a function means it can return whatever it wants. This is again valid and working Python code:
def f(x): return 3.14

@f
def target(): return "foo"
try:
    target()
except Exception as e:
    print(repr(e))
print(target)
Output:
TypeError("'float' object is not callable")
3.14
Notice that the name target does not even refer to a function anymore. It just holds the 3.14 returned by the decorator. Thus, we cannot even call target. The entire function behind it is essentially lost immediately, before it even becomes available to the global namespace. That is because f just completely ignores its first positional argument x.
Replacing a function
Expanding this further, if we want, we can have f return a function. Not doing that seems very strange, considering it is used to decorate a function. But it doesn't have to be related to the target at all. Again, this is fine:
def bar(): return "bar"
def f(x): return bar

@f
def target(): return "foo"
print(target())
print(target is bar)
Output:
bar
True
It comes down to convention
The way decorators are actually overwhelmingly used out in the wild, is in a way that still keeps a reference to the target being decorated around somewhere. In practice it can be as simple as this:
def f(x):
    print(f"applied `f({x.__name__})`")
    return

@f
def target(): return "foo"
Just running this piece of code outputs applied `f(target)`. Again, notice that we don't call target here; we only called f. But now, the decorated function is still target, so we could add the call print(target()) at the bottom and that would output foo after the other output produced by f.
The fact that most decorators don't just throw away their target comes down to convention. You (as a developer) would not expect your function/class to simply be thrown away completely, when you use a decorator.
Decoration with wrapping
This is why real-life decorators typically either return the reference to the target at the end outright (like in the last example) or they return a different callable, but that callable itself calls the target, meaning a reference to the target is kept in that new callable's local namespace. These functions are what is usually referred to as wrappers:
def f(x):
    print(f"applied `f({x.__name__})`")
    def wrapper():
        print(f"wrapper executing with {locals()=}")
        return x()
    return wrapper

@f
def target(): return "foo"

print(f"{target()=}")
print(f"{target.__name__=}")
Output:
applied `f(target)`
wrapper executing with locals()={'x': <function target at 0x7f1b2f78f250>}
target()='foo'
target.__name__='wrapper'
As you can see, what the decorator left us is wrapper, not what we originally defined as target. And the wrapper is what we call, when we write target().
Wrapping wrappers
This is the kind of behavior we typically expect when we use decorators. And therefore it is not surprising that multiple decorators stacked together behave the way they do. They are called from the inside out (as explained above), and each adds its own wrapper around what it receives from the one applied before:
def f(x):
    print(f"applied `f({x.__name__})`")
    def wrapper_from_f():
        print(f"wrapper_from_f executing with {locals()=}")
        return x()
    return wrapper_from_f

def g(x):
    print(f"applied `g({x.__name__})`")
    def wrapper_from_g():
        print(f"wrapper_from_g executing with {locals()=}")
        return x()
    return wrapper_from_g

@g
@f
def target(): return "foo"

print(f"{target()=}")
print(f"{target.__name__=}")
Output:
applied `f(target)`
applied `g(wrapper_from_f)`
wrapper_from_g executing with locals()={'x': <function f.<locals>.wrapper_from_f at 0x7fbfc8d64f70>}
wrapper_from_f executing with locals()={'x': <function target at 0x7fbfc8d65630>}
target()='foo'
target.__name__='wrapper_from_g'
This shows very clearly the difference between the order in which the decorators are called and the order in which the wrapped/wrapping functions are called.
After the decoration is done, we are left with wrapper_from_g, which is referenced by our target name in global namespace. When we call it, wrapper_from_g executes and calls wrapper_from_f, which in turn calls the original target.
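As an aside (not part of the original answer): the fact that target.__name__ ends up as 'wrapper_from_g' is exactly why real-world decorators usually apply functools.wraps to their wrappers, which copies the target's metadata onto the wrapper. A minimal sketch:

import functools

def f(x):
    @functools.wraps(x)  # copies __name__, __doc__, etc. from x onto wrapper
    def wrapper():
        return x()
    return wrapper

@f
def target(): return "foo"

print(target.__name__)  # prints 'target', not 'wrapper'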

Having a hard time understanding nested functions

Python newbie here; I'm currently learning about nested functions in Python. I'm having a particularly hard time understanding the code in the example below. In particular, at the bottom of the script, when you print echo(2)("hello"), how does the inner function know to take that string "hello" as its argument input? In my head, I'd think you would have to pass the string as some sort of input to the outer function (echo). Simply placing the string in brackets adjacent to the call of the outer function just somehow works? I can't seem to wrap my head around this..
-aspiring pythonista
# Define echo
def echo(n):
    """Return the inner_echo function."""
    # Define inner_echo
    def inner_echo(word1):
        """Concatenate n copies of word1."""
        echo_word = word1 * n
        return echo_word
    # Return inner_echo
    return inner_echo

# Call echo twice and thrice, then print
print(echo(2)('hello'), echo(3)('hello'))
The important thing here is that in Python, functions themselves are objects, too. Functions can return any type of object, so functions can in principle also return functions. And this is what echo does.
So, the output of your function call echo(2) is again a function and echo(2)("hello") evaluates that function - with "hello" as an input argument.
Maybe it is easier to understand that concept if you would split that call into two lines:
my_function_object = echo(2) # creates a new function
my_function_object("hello") # call that new function
EDIT
Perhaps this makes it clearer: If you spell out a function name without the brackets you are dealing with the function as an object. For example,
import numpy

x = numpy.sqrt(4)  # x is a number
y = numpy.sqrt     # y is a function object
z = y(4)           # z is a number
Next, if you look at the statement return inner_echo in the echo function, you will notice that what is returned is the inner function (without any brackets). So it is a function object that is returned by echo. You can check that also with print(echo(2)).

Keywords in Python that evaluate to themselves

There's a notion of keywords in Clojure, where you define them by adding a colon in front of the word you are trying to use as a keyword. A keyword also evaluates to itself. For example:
:my-keyword
;=> :my-keyword
Is there any way to implement this in python by defining some custom class or any workarounds?
The reason for having this is to have more self-descriptive parameters (strings can do the job, but one cannot keep the strings consistent while passing them around).
A practical use case for this goes something like this:
def area(polygon_type):
    return {
        "square": lambda side: side * side,
        "triangle": lambda base, height: 0.5 * base * height,
    }[polygon_type]

area("square")(2)  # ==> 4
But handling strings in such a manner leads to errors at runtime if they are mistyped. With something like keywords, even the auto-complete feature in any IDE would point out the mistake made while passing in the polygon_type.
area("Sqaure")(2) # would lead to a KeyError
Is there some feature in python that solves this type of problem, that I am unaware of?
If not, how'd someone go about tackling this?
Edit:
I am not trying to solve the problem of having such a function in particular; instead I am looking for a way of implementing the keyword concept in Python. With enums, I have to bundle the values up and explicitly define them under some category (in this case polygon_type).
Keywords in Clojure are interned strings, and Clojure provides special syntactic support for them. I suggest you take a look at how they are implemented. It seems like Python does some interning of its strings, but I don't know much about the details.
The point of using keywords is fast comparisons and map lookups. Although I am not sure how much you would benefit from it, you could try to implement your own keyword-like objects in Python using string interning, something like this:
str2kwd = {}

class Keyword:
    def __init__(self, s):
        self.s = s
    def __repr__(self):
        return str(self)
    def __str__(self):
        return ":" + self.s

def kwd(s):
    """Construct a keyword"""
    k = str2kwd.get(s)
    if k is None:
        k = Keyword(s)
        str2kwd[s] = k
    return k
Whenever you want to construct a keyword, you call the kwd function. For the Keyword class, we rely on the default equality and hash methods. Then you could use it like this:
>>> kwd("a")
:a
>>> kwd("a") == kwd("a")
True
>>> kwd("b") == kwd("a")
False
>>> kwd_a = kwd("a")
>>> kwd_b = kwd("b")
>>> {kwd_a: 3, kwd_b: 4}
{:a: 3, :b: 4}
>>> {kwd_a: 3, kwd_b: 4}[kwd_a]
3
However, I have not measured whether this results in faster comparisons and map lookups than just using regular Python strings, which is probably the most idiomatic choice for Python anyway. I doubt you would see a significant improvement in performance from using this home-made keyword class. Also note that it is best to call the kwd function at the top level of the module and assign the result to a variable that you use, instead of calling kwd every time you need a keyword. Obviously, you will not have the special keyword syntax as in Clojure.
UPDATE: How to avoid misspelling bugs
If you are worried about misspelling keys in your map, you can assign the keys to local variables and use those local variables instead of the key values directly. This way, if you misspell a local variable name you will likely get an error much sooner because you are referring to a local variable that does not exist.
>>> kwd_square = "square"
>>> kwd_triangle = "triangle"
>>> m = {kwd_square: 3, kwd_triangle: 4}
>>> m[kwd_square]
3
>>> m[Square]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'Square' is not defined
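For completeness (not from the original answer): since the edit mentions enums, the standard library's enum module is arguably the closest idiomatic analogue in Python, because a mistyped member name fails immediately with an AttributeError instead of a KeyError at lookup time, and IDEs can auto-complete the members. A sketch of the area example on that basis:

from enum import Enum

class PolygonType(Enum):
    SQUARE = "square"
    TRIANGLE = "triangle"

def area(polygon_type):
    return {
        PolygonType.SQUARE: lambda side: side * side,
        PolygonType.TRIANGLE: lambda base, height: 0.5 * base * height,
    }[polygon_type]

area(PolygonType.SQUARE)(2)  # ==> 4
area(PolygonType.SQAURE)     # AttributeError, caught at the point of use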

How to stack multiple calls? [duplicate]

I'm trying to create a function that chains results from multiple arguments.
def hi(string):
    print(string)
    return hi
Calling hi("Hello")("World") works and becomes Hello \n World as expected.
The problem is when I want to append the result as a single string, but
return string + hi produces an error since hi is a function.
I've tried using __str__ and __repr__ to change how hi behaves when it has no input. But this only creates a different problem elsewhere.
hi("Hello")("World") = "Hello"("World") -> Naturally produces an error.
I understand why the program cannot solve it, but I cannot find a solution to it.
You're running into difficulty here because the result of each call to the function must itself be callable (so you can chain another function call), while at the same time also being a legitimate string (in case you don't chain another function call and just use the return value as-is).
Fortunately Python has you covered: any type can be made to be callable like a function by defining a __call__ method on it. Built-in types like str don't have such a method, but you can define a subclass of str that does.
class hi(str):
    def __call__(self, string):
        return hi(self + '\n' + string)
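A quick illustration of the subclass in action (output shown in comments):

print(hi("Hello")("World"))        # prints Hello and World on two lines
print(type(hi("Hello")("World")))  # <class '__main__.hi'>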
This isn't very pretty and is sorta fragile (i.e. you will end up with regular str objects when you do almost any operation with your special string, unless you override all methods of str to return hi instances instead) and so isn't considered very Pythonic.
In this particular case it wouldn't much matter if you end up with regular str instances when you start using the result, because at that point you're done chaining function calls, or should be in any sane world. However, this is often an issue in the general case where you're adding functionality to a built-in type via subclassing.
To a first approximation, the question in your title can be answered similarly:
class add(int):  # could also subclass float
    def __call__(self, value):
        return add(self + value)
To really do add() right, though, you want to be able to return a callable subclass of the result type, whatever type it may be; it could be something besides int or float. Rather than trying to catalog these types and manually write the necessary subclasses, we can dynamically create them based on the result type. Here's a quick-and-dirty version:
class AddMixIn(object):
    def __call__(self, value):
        return add(self + value)

def add(value, _classes={}):
    t = type(value)
    if t not in _classes:
        _classes[t] = type("add_" + t.__name__, (t, AddMixIn), {})
    return _classes[t](value)
Happily, this implementation works fine for strings, since they can be concatenated using +.
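For instance (illustrative):

print(add(1)(2)(3))              # 6
print(add("Hello, ")("World!"))  # Hello, World!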
Once you've started down this path, you'll probably want to do this for other operations too. It's a drag copying and pasting basically the same code for every operation, so let's write a function that writes the functions for you! Just specify a function that actually does the work, i.e., takes two values and does something to them, and it gives you back a function that does all the class munging for you. You can specify the operation with a lambda (anonymous function) or a predefined function, such as one from the operator module. Since it's a function that takes a function and returns a function (well, a callable object), it can also be used as a decorator!
def chainable(operation):
    class CallMixIn(object):
        def __call__(self, value):
            return do(operation(self, value))
    def do(value, _classes={}):
        t = type(value)
        if t not in _classes:
            _classes[t] = type(t.__name__, (t, CallMixIn), {})
        return _classes[t](value)
    return do

add = chainable(lambda a, b: a + b)
# or...
import operator
add = chainable(operator.add)
# or as a decorator...
@chainable
def add(a, b): return a + b
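All three spellings behave the same way (illustrative):

print(add(3)(4)(5))        # 12
print(add("a")("b")("c"))  # abc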
In the end it's still not very pretty and is still sorta fragile and still wouldn't be considered very Pythonic.
If you're willing to use an additional (empty) call to signal the end of the chain, things get a lot simpler, because you just need to return functions until you're called with no argument:
def add(x):
    return lambda y=None: x if y is None else add(x + y)
You call it like this:
add(3)(4)(5)() # 12
You are getting into some deep, Haskell-style, type-theoretical issues by having hi return a reference to itself. Instead, just accept multiple arguments and concatenate them in the function.
def hi(*args):
    return "\n".join(args)
Some example usages:
print(hi("Hello", "World"))
print("Hello\n" + hi("World"))

Can I use a decorator to mutate the local scope of a function in Python?

Is there any way of writing a decorator such that the following would work?
assert 'z' not in globals()

@my_decorator
def func(x, y):
    print(z)
EDIT: moved from answer
In answer to hop's "why?": syntax sugar / DRY.
It's not about caching, it's about calculating z (and z1, z2, z3, ...) based upon the values of x & y.
I have lots of functions which do related things, and I don't want to have to write
z1, z2, z3 = calculate_from(x, y)
at the beginning of every single function - I'll get it wrong somewhere. If this were C, I'd do this with cpp (if this were Lisp, I'd do this with macros ...), but I wanted to see if decorators could do the same thing.
If it helps, I'd almost certainly call the decorator "precalculate_z", and it certainly wouldn't be part of any public API.
I could probably get a similar effect from using the class infrastructure as well, but I wanted to see if it was doable with raw functions.
Echoing Hop's answer
Don't do it.
Seriously, don't do this. Lisp and Ruby are more appropriate languages for writing your own custom syntax. Use one of those. Or find a cleaner way to do this
If you must, you want dynamic scoped variables, not lexically scoped.
Python doesn't have dynamically scoped variables, but you can simulate it. Here's an example that simulates it by creating a global binding, but restores the previous value on exit:
http://codepad.org/6vAY8Leh
def adds_dynamic_z_decorator(f):
    def replacement(*arg, **karg):
        # create a new 'z' binding in globals, saving the previous one
        if 'z' in globals():
            oldZ = (globals()['z'],)
        else:
            oldZ = None
        try:
            globals()['z'] = None
            # invoke the original function
            res = f(*arg, **karg)
        finally:
            # restore any old bindings
            if oldZ:
                globals()['z'] = oldZ[0]
            else:
                del globals()['z']
        return res
    return replacement
@adds_dynamic_z_decorator
def func(x, y):
    print(z)

def other_recurse(x):
    global z
    print('x=%s, z=%s' % (x, z))
    recurse(x + 1)
    print('x=%s, z=%s' % (x, z))

@adds_dynamic_z_decorator
def recurse(x=0):
    global z
    z = x
    if x < 3:
        other_recurse(x)

print('calling func(1,2)')
func(1, 2)

print('calling recurse()')
recurse()
I make no warranties on the utility or sanity of the above code. Actually, I warrant that it is insane, and you should avoid using it unless you want a flogging from your Python peers.
This code is similar to both eduffy's and John Montgomery's code, but ensures that 'z' is created and properly restored "like" a local variable would be -- for instance, note how 'other_recurse' is able to see the binding for 'z' specified in the body of 'recurse'.
I don't know about the local scope, but you could provide an alternative global name space temporarily. Something like:
import types

def my_decorator(fn):
    def decorated(*args, **kw):
        my_globals = {}
        my_globals.update(globals())
        my_globals['z'] = 'value of z'
        call_fn = types.FunctionType(fn.__code__, my_globals)
        return call_fn(*args, **kw)
    return decorated

@my_decorator
def func(x, y):
    print(z)

func(0, 1)
Which should print "value of z"
a) don't do it.
b) seriously, why would you do that?
c) you could declare z as global within your decorator, so z will not be in globals() until after the decorator has been called for the first time, so the assert won't bark.
d) why???
I'll first echo the "please don't", but that's your choice. Here's a solution for you:
assert 'z' not in globals()

class my_dec:
    def __init__(self, f):
        self.f = f
    def __call__(self, x, y):
        z = x + y
        self.f(x, y, z)

@my_dec
def func(x, y, z):
    print(z)

func(1, 3)
It does require z in the formal parameters, but not the actual.
I could probably get a similar effect from using the class infrastructure as well, but I wanted to see if it was doable with raw functions.
Well, Python is an object-oriented language. You should do this in a class, in my opinion. Making a nice class interface would surely simplify your problem. This isn't what decorators were made for.
Explicit is better than implicit.
Is this good enough?
def provide_value(f):
    f.foo = "Bar"
    return f

@provide_value
def g(x):
    print(g.foo)
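Calling it then just reads the attribute off the function object (illustrative):

g(42)  # prints Bar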
(If you really want evil, assigning into f.__globals__ seems fun.)
Others have given a few ways of making a working decorator; many have advised against doing so because it's so stylistically different from normal Python behavior that it'll really confuse anyone trying to understand the code.
If you're needing to recalculate things a lot, would it make sense to group them together in an object? Compute z1...zN in the constructor, then the functions that use these values can access the pre-computed answers as part of the instance.
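A minimal sketch of that grouping (the names here are illustrative, not from the question):

class Precalculated:
    def __init__(self, x, y):
        # compute the shared values once, up front
        self.z1 = x + y
        self.z2 = x * y
        self.z3 = x - y
    def combined(self):
        return self.z1 + self.z2   # uses the precomputed values
    def difference(self):
        return self.z3

p = Precalculated(3, 4)
print(p.combined(), p.difference())  # 19 -1

Each method then reads the precomputed z values from the instance instead of recalculating them.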
