I was trying to create some code that can solve problems with sums.
For example, if you want to take the sum of 4*i for all values of i from 3 to 109, I want this code to be able to do that. However, it should also be able to deal with more complicated things than just multiplication. See a sample of what I want to do below:
from typing import Callable

class MyClass:
    def __init__(self):
        pass

    def function_sum(self, lower_bound: int, upper_bound: int, function: Callable, *args):
        return sum(function(*args) for i in range(lower_bound, upper_bound + 1))

print(MyClass().function_sum(1, 10, lambda x: x*i, 1))  # NameError: 'i' is not defined
Is there a way to use the loop variable i, which is used inside function_sum, as part of the function passed in as a parameter, without forcing i to be a parameter of that function?
from typing import Callable

class MyClass:
    def __init__(self):
        pass

    def function_sum(self, lower_bound: int, upper_bound: int, func: Callable, *args):
        # i is forced to be a parameter of func
        return sum(func(*args, i) for i in range(lower_bound, upper_bound + 1))

print(MyClass().function_sum(1, 10, lambda x: x*i, 1))  # TypeError: the lambda only accepts x
There's no reasonable way to do this; functions bind their scope at definition time, based on where they're defined, not called. Python doesn't directly support swapping out scopes like that.
You typically don't want to do this anyway, as it's a truly terrible design. Think about it: Even if this works, it requires the caller of the function to have detailed knowledge of the names and types of (some) local variables in the callee. What if the callee changed the name of the iteration variable from i to j (maybe i was being used for something else)? Now every caller that relied on it has to change. It ends up ridiculously tightly coupled (a bad thing) to no real benefit, since, if the function being called imposes the proper requirements on the callable passed, it can safely pass the argument by itself anyway (as your fixed code would demonstrate, if the lambda were properly defined as lambda x,i: x*i).
I'd suggest having the func only take i as a parameter, and skip the *args thing entirely in favor of just binding that information inside func:
from typing import Callable

def function_sum(lower_bound: int, upper_bound: int, func: Callable[[int], int]) -> int:
    return sum(func(i) for i in range(lower_bound, upper_bound + 1))

print(function_sum(1, 10, lambda i: 4*i))  # 220
In general, any value that a function needs to get from its caller should be passed into it as a parameter; this is the reason that parameters exist.
The x value is not being supplied by func's caller (function_sum), though, it's being supplied at the point where func is defined -- so that's the value that doesn't need to be a parameter, and can simply be defined as part of the function body (in this case 4*i, as opposed to making x a parameter, having the body be x*i, and then passing 4 into function_sum to in turn pass to func).
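If the constant itself only becomes known at runtime, it can be bound with a small closure factory instead of *args. A minimal runnable sketch (function_sum reproduced from above so the example is self-contained; make_term is an illustrative helper, not from the original code):

```python
from typing import Callable

def function_sum(lower_bound: int, upper_bound: int, func: Callable[[int], int]) -> int:
    return sum(func(i) for i in range(lower_bound, upper_bound + 1))

def make_term(coefficient: int) -> Callable[[int], int]:
    # coefficient is captured by the closure, so the caller of the
    # returned function never needs to pass it explicitly
    return lambda i: coefficient * i

print(function_sum(1, 10, make_term(4)))  # sum of 4*i for i in 1..10 -> 220
```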
Related
I have a piece of code that generates a function from many smaller functions while making the outermost one accept an argument x.
In other words, I have an input x and I need to do various transformations to x that are decided at runtime.
This is done by iteratively calling this function (it essentially wraps a function in another function).
Here is the function:
import typing
from typing import Union

def build_layer(curr_layer: typing.Callable, prev_layer: Union[typing.Callable, int]) -> typing.Callable:
    def _function(x):
        return curr_layer(prev_layer(x) if callable(prev_layer) else x)
    return _function
Sidenote: as you can see, if prev_layer is not callable it gets replaced by the input x, so I am using dummy integers to mark where the input goes.
The problem: this code cannot be pickled.
I do not seem to be able to figure out a way to rewrite this code in such a way to be pickleable.
Note: I need this object to be persisted on disk, but it's also used in multiprocessing, where it's pickled for IPC (these functions are not used there, so technically they could be moved).
I also have a more complex version of this function that handles multiple inputs (using a fixed aggregation function, in this case torch.cat). I know these two can be merged into one generic function, and I will do that once I get it to work.
Here is the code for the second function:
import typing

import torch

def build_layer_multi_input(curr_layer: typing.Callable, prev_layers: list) -> typing.Callable:
    def _function(x):
        return curr_layer(torch.cat([layer(x) if callable(layer) else x for layer in prev_layers]))
    return _function
I resolved this by attaching the return value of these functions to a class instance as described in this thread.
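A minimal sketch of that approach, under the assumption that a module-level class holding the layers as attributes reproduces the closure's behavior (Layer and double are illustrative names, not from the original code):

```python
import pickle
from typing import Callable, Union

class Layer:
    """Picklable stand-in for the build_layer closure (illustrative sketch)."""

    def __init__(self, curr_layer: Callable, prev_layer: Union[Callable, int]):
        self.curr_layer = curr_layer
        self.prev_layer = prev_layer

    def __call__(self, x):
        # Same logic as _function: a non-callable prev_layer marks the input slot
        return self.curr_layer(self.prev_layer(x) if callable(self.prev_layer) else x)

def double(x):
    return 2 * x

layer = Layer(curr_layer=double, prev_layer=0)  # dummy int: pass x straight through
restored = pickle.loads(pickle.dumps(layer))    # no nested function, so pickling works
print(restored(21))  # 42
```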
I wanted to do something like:
class foo():
    def __init__(self):
        ...

    def bar(self, param=set()):
        ...
where bar calls itself recursively, passing the set along. But after implementing the snippet above, my script started to do strange things. So I decided to change it to the following form:
class foo():
    def __init__(self):
        ...

    def bar(self, param=None):
        if param is None:
            param = set()
After this change everything works as intended. Does anyone know why the original form does not work?
This is documented behaviour; from the docs:
Default parameter values are evaluated from left to right when the function definition is executed. This means that the expression is evaluated once, when the function is defined, and that the same “pre-computed” value is used for each call. This is especially important to understand when a default parameter is a mutable object, such as a list or a dictionary: if the function modifies the object (e.g. by appending an item to a list), the default value is in effect modified. This is generally not what was intended. A way around this is to use None as the default, and explicitly test for it in the body of the function.
So, as it says, the second approach, explicitly testing whether the parameter is None and then assigning the empty set, is the recommended way to avoid the problem.
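A quick, runnable demonstration of that behaviour (the set-mutating bodies are illustrative):

```python
def bad(param=set()):
    # The default set is created once, at definition time, and shared
    param.add(len(param))
    return param

def good(param=None):
    # A fresh set is created on every call that omits param
    if param is None:
        param = set()
    param.add(len(param))
    return param

print(bad())   # {0}
print(bad())   # {0, 1} -- the same set accumulates across calls
print(good())  # {0}
print(good())  # {0} -- a fresh set every call
```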
In Python, method parameter types need not be specified; Python is dynamically typed. But in some code snippets I see the types being declared:
def method(p1: int, p2: int) -> None:
1) Why is this done?
2) For other data structures, why do I only define the container and not the type of the elements it accepts?
def multiply(num1: list, num2: list):
What is the purpose of such a design, and why am I not allowed to define the element type of the list, as in:
def multiply(num1: list[int], num2: list[int]):
For the execution of the script, type annotations are irrelevant; the interpreter does not enforce them.
But they are generally a good thing for code clarity: you can immediately see what parameter types a particular function/method accepts and what it returns. Also, IDEs (PyCharm, for example) usually highlight or warn when you try to pass a different type to an annotated function, and can more easily infer the type of a variable you assign an annotated function's result to.
That being said, since the interpreter doesn't care about type annotations, it will not stop you from passing the wrong type of parameter to a function (different from the annotated one). The same applies when you declare, for example, the return type of a function to be a number but return a string in the body. An IDE can highlight it, but the interpreter doesn't care.
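This can be seen with a short, illustrative example:

```python
def add(a: int, b: int) -> int:
    return a + b

print(add(2, 3))          # 5
print(add("py", "thon"))  # python -- no error: annotations are not enforced

# The annotations are still stored and available to tools:
print(add.__annotations__)
```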
From Documentation:
A type alias is defined by assigning the type to the alias. In this example, Vector and List[float] will be treated as interchangeable synonyms:
from typing import List

Vector = List[float]

def scale(scalar: float, vector: Vector) -> Vector:
    return [scalar * num for num in vector]

# typechecks; a list of floats qualifies as a Vector.
new_vector = scale(2.0, [1.0, -4.2, 5.4])
So the point is that you have to from typing import List, and then you can use List[int] or whatever type you want. (On Python 3.9+ the built-in list[int] also works directly; on older versions subscripting list raises a TypeError, which is why the last snippet in the question fails.)
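For instance, the question's multiply function could be annotated like this (the element-wise body is illustrative):

```python
from typing import List

def multiply(num1: List[int], num2: List[int]) -> List[int]:
    # Element-wise product; the annotation documents intent but is not enforced
    return [a * b for a, b in zip(num1, num2)]

print(multiply([1, 2, 3], [4, 5, 6]))  # [4, 10, 18]
```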
But as for the purpose of this: it prevents many bugs, especially when several people are working on a code base and want to use each other's functions, or when you come back to a project after some time and don't remember anything.
More explanation:
The type hinting is solely for making the code more readable and understandable to the human eye (and to tools); AFAIK, List and the other types defined in typing don't implement any container logic themselves.
Type hinting is very useful to make sure a received parameter is what you expected.
It's like a contract or policy describing how your function wants to deal with its parameters, and the most typical use case for it is with OOP.
Imagine that your function depends on another object and knows that the object contains a method called foo:
class MyObject:
    def foo(self):
        print("called")

def my_method(obj: MyObject):
    obj.foo()
We're calling the foo method with confidence, because we only accept an object instantiated from the MyObject class, which we know for sure contains a foo method.
In Python I can pass a function as a parameter to another function, e.g.:
inspect.getmembers(random, callable)
It would get me all the callable members of the random module, where callable is a function passed in to perform the check (only members satisfying the check are returned).
My particular question is how to get all the non-callable members, and more broadly: is there a way to pass the "reverse" of a function as an argument?
I have tried this:
inspect.getmembers(random, !callable)
inspect.getmembers(random, not callable)
And the first is a syntax error, while the second does not work.
As a workaround I have defined my own function:
def uncallable(object):
    return not callable(object)
And so this works:
inspect.getmembers(random, uncallable)
But I wonder if there's an easier solution.
Just use a lambda:
inspect.getmembers(random, lambda x: not callable(x))
Just write your uncallable wrapper.
No language I am aware of allows the kind of high level function arithmetic you expect.
There is no way to make such a concept work in general.
What would the resulting function of each of these be?
! len
len + len
len + ! len
So what we're left with are explicitly written higher-order functions, a.k.a. functions that take functions as parameters and return other functions, like this:
def negate(func):
    def wrapped(*args, **kwargs):
        return not func(*args, **kwargs)
    return wrapped

uncallable = negate(callable)
This is the same pattern as Python decorators.
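Putting negate to work on the original inspect.getmembers call (a runnable sketch):

```python
import inspect
import random

def negate(func):
    # Higher-order function: wraps func and inverts its boolean result
    def wrapped(*args, **kwargs):
        return not func(*args, **kwargs)
    return wrapped

uncallable = negate(callable)

members = inspect.getmembers(random, uncallable)
# Every member returned is non-callable (constants, module attributes, ...)
print(all(not callable(value) for _name, value in members))  # True
```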
This is from the source code of csv2rec in matplotlib.
How can this function work, if its only parameters are func and default?
def with_default_value(func, default):
    def newfunc(name, val):
        if ismissing(name, val):
            return default
        else:
            return func(val)
    return newfunc
ismissing takes a name and a value and determines if the row should be masked in a numpy array.
func will be either str, int, float, or dateparser... it converts data. Maybe not important. I'm just wondering how it can get a name and a val.
I'm a beginner. Thanks for any 2cents! I hope to get good enough to help others!
This with_default_value function is what's often referred to (imprecisely) as "a closure" (technically, the closure is rather the inner function that gets returned, here newfunc -- see e.g. here). More generically, with_default_value is a higher-order function ("HOF"): it takes a function (func) as an argument, it also returns a function (newfunc) as the result.
I've seen answers confusing this with the decorator concept and construct in Python, which is definitely not the case -- especially since you mention func as often being a built-in such as int. Decorators are also higher-order functions, but rather specific ones: ones which return a decorated, i.e. "enriched", version of their function argument (which must be the only argument -- "decorators with arguments" are obtained through one more level of function/closure nesting, not by giving the decorator HOF more than one argument), which gets reassigned to exactly the same name as that function argument (and so typically has the same signature -- using a decorator otherwise would be extremely peculiar, un-idiomatic, unreadable, etc).
So forget decorators, which have absolutely nothing to do with the case, and focus on the newfunc closure. A lexically nested function can refer to (though not rebind) all local variable names (including argument names, since arguments are local variables) of the enclosing function(s) -- that's why it's known as a closure: it's "closed over" these "free variables". Here, newfunc can refer to func and default -- and does.
Higher-order functions are a very natural thing in Python, especially since functions are first-class objects (so there's nothing special you need to do to pass them as arguments, return them as function values, or even storing them in lists or other containers, etc), and there's no namespace distinction between functions and other kinds of objects, no automatic calling of functions just because they're mentioned, etc, etc. (It's harder - a bit harder, or MUCH harder, depending - in other languages that do draw lots of distinctions of this sort). In Python, mentioning a function is just that -- a mention; the CALL only happens if and when the function object (referred to by name, or otherwise) is followed by parentheses.
That's about all there is to this example -- please do feel free to edit your question, comment here, etc, if there's some other specific aspect that you remain in doubt about!
Edit: so the OP commented courteously asking for more examples of "closure factories". Here's one -- imagine some abstract kind of GUI toolkit, and you're trying to do:
for i in range(len(buttons)):
    buttons[i].onclick(lambda: mainwin.settitle("button %d click!" % i))
but this doesn't work right -- i within the lambda is late-bound, so by the time one button is clicked i's value is always going to be the index of the last button, no matter which one was clicked. There are various feasible solutions, but a closure factory's an elegant possibility:
def makeOnclick(message):
    return lambda: mainwin.settitle(message)

for i in range(len(buttons)):
    buttons[i].onclick(makeOnclick("button %d click!" % i))
Here, we're using the closure factory to tweak the binding time of variables!-) In one specific form or another, this is a pretty common use case for closure factories.
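The same binding-time effect can be reproduced without a GUI toolkit, using plain lists of functions:

```python
# Late binding: every lambda closes over the same i, so all see its final value
late = [lambda: i for i in range(3)]
print([f() for f in late])   # [2, 2, 2]

# Closure factory: each call to make_getter captures its own value
def make_getter(value):
    return lambda: value

early = [make_getter(i) for i in range(3)]
print([f() for f in early])  # [0, 1, 2]
```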
This is a Python decorator -- basically a function wrapper. (Read all about decorators in PEP 318 -- http://www.python.org/dev/peps/pep-0318/)
If you look through the code, you will probably find something like this:
def some_func(name, val):
    ...

some_func = with_default_value(some_func, 'the_default_value')
The intention of this decorator seems to be to supply a default value if either the name or val arguments are missing (presumably, if they are set to None).
As for why it works:
with_default_value returns a function object, which is basically going to be a copy of that nested newfunc, with the func call and default value substituted with whatever was passed to with_default_value.
If someone does 'foo = with_default_value(bar, 3)', the return value is basically going to be a new function:
def foo(name, val):
    if ismissing(name, val):
        return 3
    else:
        return bar(val)
so you can then take that return value, and call it.
This is a function that returns another function; name and val are the parameters of the returned function.
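Here is the whole mechanism in a runnable sketch, with a stub ismissing standing in for matplotlib's helper (the stub simply treats None as missing):

```python
def ismissing(name, val):
    # Stand-in for matplotlib's helper: treat None as a missing value
    return val is None

def with_default_value(func, default):
    def newfunc(name, val):
        if ismissing(name, val):
            return default
        else:
            return func(val)
    return newfunc

# Build a converter: parse values with int, fall back to -1 when missing
to_int = with_default_value(int, -1)
print(to_int("count", "42"))  # 42
print(to_int("count", None))  # -1
```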