I'm trying to create a function that chains results from multiple arguments.
def hi(string):
    print(string)
    return hi
Calling hi("Hello")("World") works and outputs Hello\nWorld as expected.
The problem is when I want to accumulate the result as a single string:
return string + hi produces an error, since hi is a function.
I've tried using __str__ and __repr__ to change how hi behaves when it has no input, but this only creates a different problem elsewhere:
hi("Hello")("World") = "Hello"("World") -> Naturally produces an error.
I understand why the program cannot solve it, but I cannot find a solution to it.
You're running into difficulty here because the result of each call to the function must itself be callable (so you can chain another function call), while at the same time also being a legitimate string (in case you don't chain another function call and just use the return value as-is).
Fortunately Python has you covered: any type can be made to be callable like a function by defining a __call__ method on it. Built-in types like str don't have such a method, but you can define a subclass of str that does.
class hi(str):
def __call__(self, string):
return hi(self + '\n' + string)
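For example, the result can be chained and still used as an ordinary string:

greeting = hi("Hello")("World")
print(greeting)        # Hello
                       # World
print(greeting + "!")  # plain str concatenation still works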
This isn't very pretty and is sorta fragile (i.e. you will end up with regular str objects when you do almost any operation with your special string, unless you override all methods of str to return hi instances instead) and so isn't considered very Pythonic.
In this particular case it wouldn't much matter if you end up with regular str instances when you start using the result, because at that point you're done chaining function calls, or should be in any sane world. However, this is often an issue in the general case where you're adding functionality to a built-in type via subclassing.
To a first approximation, the question in your title can be answered similarly:
class add(int): # could also subclass float
def __call__(self, value):
return add(self + value)
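For example:

print(add(3)(4)(5))  # 12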
To really do add() right, though, you want to be able to return a callable subclass of the result type, whatever type it may be; it could be something besides int or float. Rather than trying to catalog these types and manually write the necessary subclasses, we can dynamically create them based on the result type. Here's a quick-and-dirty version:
class AddMixIn(object):
def __call__(self, value):
return add(self + value)
def add(value, _classes={}):  # the mutable default acts as a per-type class cache
t = type(value)
if t not in _classes:
_classes[t] = type("add_" + t.__name__, (t, AddMixIn), {})
return _classes[t](value)
Happily, this implementation works fine for strings, since they can be concatenated using +.
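For example:

print(add(1)(2)(3))           # 6
print(add("Hello")("World"))  # HelloWorld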
Once you've started down this path, you'll probably want to do this for other operations too. It's a drag copying and pasting basically the same code for every operation, so let's write a function that writes the functions for you! Just specify a function that actually does the work, i.e., takes two values and does something to them, and it gives you back a function that does all the class munging for you. You can specify the operation with a lambda (anonymous function) or a predefined function, such as one from the operator module. Since it's a function that takes a function and returns a function (well, a callable object), it can also be used as a decorator!
def chainable(operation):
class CallMixIn(object):
def __call__(self, value):
return do(operation(self, value))
def do(value, _classes={}):
t = type(value)
if t not in _classes:
_classes[t] = type(t.__name__, (t, CallMixIn), {})
return _classes[t](value)
return do
add = chainable(lambda a, b: a + b)
# or...
import operator
add = chainable(operator.add)
# or as a decorator...
@chainable
def add(a, b): return a + b
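Whichever spelling you choose, the chaining works the same way:

print(add(1)(2)(3))  # 6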
In the end it's still not very pretty and is still sorta fragile and still wouldn't be considered very Pythonic.
If you're willing to use an additional (empty) call to signal the end of the chain, things get a lot simpler, because you just need to return functions until you're called with no argument:
def add(x):
return lambda y=None: x if y is None else add(x+y)
You call it like this:
add(3)(4)(5)() # 12
You are getting into some deep, Haskell-style, type-theoretical issues by having hi return a reference to itself. Instead, just accept multiple arguments and concatenate them in the function.
def hi(*args):
return "\n".join(args)
Some example usages:
print(hi("Hello", "World"))
print("Hello\n" + hi("World"))
Consider having a function that returns a complex value:
def my_fn():
return (create_this(), create_that(), someotherstuff)
Assuming Pylance knows what create_this() returns, as well as what the other values are, it will implicitly tell you that my_fn returns a Tuple[Type1, Type2, Type3].
Now let's say you have a function that expects to receive an argument that contains whatever this function returned, but you want to still get type hints. You can do this:
def process_fn_value(data: Tuple[Type1, Type2, Type3]):
...
But that's rather verbose, isn't it? It would be better to just write:
def process_fn_value(data: ReturnOf[my_fn]):
...
I have tried the following, hoping to extract the type from a function by making a generic type and then calling type() on it. But it doesn't even properly decode the type of the generic value:
T = TypeVar('T')
def RetVal(cb: Callable[[Any], T]):
return type(cb())
def test_fn():
return "test"
def test_consumer(arg: RetVal[test_fn]):
return arg
Another thing I tried, mostly after looking how Generic[T] is implemented:
class ReturnValue(Type[T], _root=True):
def __new__(func, cb: Callable[[], Generic[T]]) -> T:
return type(cb())
def test_fn():
return [1,2,3]
def test_consumer(arg: ReturnValue[test_fn]):
return arg
testtype = ReturnValue(test_fn)
None of these work.
Is there any such type hint in Python?
Note: If you think that this is a problem I shouldn't be facing if I wrote the code in such and such way, maybe you're right. But please consider sometimes one cannot change EVERYTHING and yet might be able to create at least partial improvement in the codebase.
I used a naive approach to write a wrapper: get all *args and **kwargs and pass them on to the wrapped function. But something went wrong, so I simplified the example down to the core to illustrate my troubles.
# simplest wrapper possible: just pass the args
def wraps(f):
def call(*argv, **kw):
# add some meaningful manipulations later
return f(*argv, **kw)
return call
# check the wrapper behaves identically
class M:
def __init__(this, param):
this.param = param
M.__new__ = M.__new__  # rebinding __new__ without wrapping it is harmless
m1 = M(1)
M.__new__ = wraps(M.__new__)
m2 = M(2)
m1 is instantiated normally, but m2 fails with the following error:
TypeError: object.__new__() takes exactly one argument (the type to instantiate)
The question is how to define the wraps and call functions properly so that the wrapper behaves identically to the wrapped function, regardless of what that function is.
This is obviously not the end objective, since the primitive lambda x: x would suffice for that. It is a starting point from which I can introduce further complications.
The short answer: it's impossible. You cannot define a perfect wrapper in Python (nor in many other languages).
The slightly longer version: a Python function is a first-class object, and every manipulation acceptable for objects can be performed on a function too. So you cannot presume that some complex procedure will limit itself to merely calling the function passed as an argument, and will not use the function object in other, less obvious ways.
A much more verbose discussion, with examples
Functions defined on only part of their domain are pretty common:
def half(i):
if i < 0:
raise ValueError
if i & 1:
raise ValueError
return i / 2
Pretty straightforward. Now we can get a little more confusing:
class Veggy:
def __init__(this, kind):
this.kind = kind
def pr(this):
print(this.kind)
def assess(v):
if v.kind in ['tomato', 'carrot']:
raise ValueError
v.pr()
Here Veggy is used as a function proxy, but it also has a public attribute kind, which the assess function checks before executing.
The same thing can be done with a function object, since it also has additional attributes besides being callable:
def test(x):
return x + x
def assess4(f, *argv, **kw):
if f.__name__ != 'test':
raise ValueError
if f.__module__ != '__main__':
raise ValueError
if len(f.__code__.co_code) % 8 == 4:
raise ValueError
return f(*argv, **kw)
Writing a correct wrapper becomes a challenge. And that challenge can be complicated further:
def assess0(f, *argv, **kw):
    if len(f.__code__.co_code) % 8 == 0:
        kw['arg'] = True
        return f(*argv[1:], **kw)
    else:
        kw['arg'] = False
        return f(*argv[:-1], **kw)
A universal wrapper would have to handle both assess0 and assess4 correctly, which is pretty much impossible. And we have not even touched id() magic: checking the id would cast the one acceptable function in stone, leaving no room for any wrapper at all.
Coding etiquette
So you cannot write a perfect wrapper. Why does anyone bother to write one? Why are wrappers so common, when they cannot guarantee behavioral equivalence and can introduce non-trivial changes in code flow?
The simple answer is coding conventions, in particular the famous substitution principle: code should keep its behavioral properties when some object is substituted with another of the same type. Python puts little focus on declaring and enforcing types. A rigorous type system is not a must; you can establish APIs and protocols through documentation and type annotations, as the Python language itself does.
"Programs must be written for people to read, and only incidentally for machines to execute." OOP conventions exist only in people's minds. And the Python developers broke those conventions by requiring non-standard behavior when overriding object methods: this unconventional treatment is what makes it impossible to use a generic decorator to transform the __init__ and __new__ methods.
The final solution
If Python treats __new__ as special, then a generic wrapper should do the same.
# simplest wrapper possible: just pass the args
def wraps(f):
def call(*argv, **kw):
# add some meaningful manipulations later
return f(*argv, **kw)
def call_new(*argv, **kw):
# add some meaningful manipulations later
        return f(argv[0])  # object.__new__ accepts only the class argument
if f is object.__new__:
return call_new
# elif other_special_case: pass
else:
return call
Now it successfully passes the test:
# check the wrapper behaves identically
class M:
def __init__(this, param):
this.param = param
M.__new__ = M.__new__
m1 = M(1)
M.__new__ = wraps(M.__new__)
m2 = M(2)
The drawback is that you have to implement a distinct workaround for every other convention-breaking function besides __new__ to make your wrapper semi-applicable in a universal context. But that is the best you can get out of Python.
(Python 3)
First of all, I feel my title isn't quite what it should be, so if you stick through the question and come up with a better title, please feel free to edit it.
I have recently learned about Python Decorators and Python Annotations, and so I wrote two little functions to test what I have recently learned.
One of them, called wraps, is supposed to mimic the behaviour of functools.wraps, while the other, called ensure_types, is supposed to check, for a given function and through its annotations, whether the arguments passed to that function are of the correct types.
This is the code I have for those functions:
def wraps(original_func):
"""Update the decorated function with some important attributes from the
one that was decorated so as not to lose good information"""
def update_attrs(new_func):
# Update the __annotations__
for key, value in original_func.__annotations__.items():
new_func.__annotations__[key] = value
# Update the __dict__
for key, value in original_func.__dict__.items():
new_func.__dict__[key] = value
# Copy the __name__
new_func.__name__ = original_func.__name__
# Copy the docstring (__doc__)
new_func.__doc__ = original_func.__doc__
return new_func
return update_attrs # return the decorator
def ensure_types(f):
"""Uses f.__annotations__ to check the expected types for the function's
arguments. Raises a TypeError if there is no match.
If an argument has no annotation, object is returned and so, regardless of
the argument passed, isinstance(arg, object) evaluates to True"""
    @wraps(f)  # say that test_types is wrapping f
def test_types(*args, **kwargs):
# Loop through the positional args, get their name and check the type
for i in range(len(args)):
# function.__code__.co_varnames is a tuple with the names of the
            # arguments in the order they appear in the function def statement
var_name = f.__code__.co_varnames[i]
if not(isinstance(args[i], f.__annotations__.get(var_name, object))):
raise TypeError("Bad type for function argument named '{}'".format(var_name))
# Loop through the named args, get their value and check the type
for key in kwargs.keys():
if not(isinstance(kwargs[key], f.__annotations__.get(key, object))):
raise TypeError("Bad type for function argument named '{}'".format(key))
return f(*args, **kwargs)
return test_types
Supposedly, everything is alright until now. Both the wraps and the ensure_types are supposed to be used as decorators. The problem comes when I defined a third decorator, debug_dec that is supposed to print to the console when a function is called and its arguments. The function:
def debug_dec(f):
"""Does some annoying printing for debugging purposes"""
    @wraps(f)
def profiler(*args, **kwargs):
print("{} function called:".format(f.__name__))
print("\tArgs: {}".format(args))
print("\tKwargs: {}".format(kwargs))
return f(*args, **kwargs)
return profiler
That also works nicely. The problem comes when I try to use debug_dec and ensure_types at the same time.
@ensure_types
@debug_dec
def testing(x: str, y: str = "lol"):
print(x)
print(y)
testing("hahaha", 3) # raises no TypeError as expected
But if I swap the order in which the decorators are applied, it works just fine.
Can someone please help me understand what is going wrong, and whether there is any way of solving the problem besides swapping those two lines?
EDIT
If I add the lines:
print(testing.__annotations__)
print(testing.__code__.co_varnames)
The output is as follows:
#{'y': <class 'str'>, 'x': <class 'str'>}
#('args', 'kwargs', 'i', 'var_name', 'key')
Although wraps maintains the annotations, it doesn't maintain the function signature. You see this when you print out the co_varnames. Since ensure_types does its checking by comparing the names of the arguments with the names in the annotation dict, it fails to match them up, because the wrapped function has no arguments named x and y (it just accepts generic *args and **kwargs).
You could try using the decorator module, which lets you write decorators that act like functools.wrap but also preserve the function signature (including annotations).
There is probably also a way to make it work "manually", but it would be a bit of a pain. Basically, you would have wraps store the original function's argspec (the names of its arguments), then have ensure_types use this stored argspec instead of the wrapper's argspec when checking the types. Essentially your decorators would pass the argspec along in parallel with the wrapped functions. However, using decorator is probably easier.
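A minimal sketch of that manual approach, assuming all you need back is the original function. It relies on the __wrapped__ attribute convention (the same one functools.wraps follows) and inspect.unwrap to walk the chain:

import inspect

def wraps(original_func):
    def update_attrs(new_func):
        # ... copy __annotations__, __dict__, __name__ and __doc__ as before ...
        new_func.__wrapped__ = original_func  # remember the function we wrapped
        return new_func
    return update_attrs

def ensure_types(f):
    original = inspect.unwrap(f)  # follow __wrapped__ back to the original function
    def test_types(*args, **kwargs):
        for i, arg in enumerate(args):
            var_name = original.__code__.co_varnames[i]
            if not isinstance(arg, original.__annotations__.get(var_name, object)):
                raise TypeError("Bad type for function argument named '{}'".format(var_name))
        for key, value in kwargs.items():
            if not isinstance(value, original.__annotations__.get(key, object)):
                raise TypeError("Bad type for function argument named '{}'".format(key))
        return f(*args, **kwargs)
    return test_types

With this, ensure_types checks against the real parameter names of testing, no matter how many wrappers sit in between.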
I have been working at learning Python over the last week and it has been going really well; however, I have now been introduced to custom functions and I've sort of hit a wall. I understand the basics of it, such as:
def helloworld():
print("Hello World!")
helloworld()
I know this will print "Hello World!".
However, when it comes to getting information from one function to another, I find that confusing, e.g. when function1 and function2 have to work together to perform a task. I'm also unsure when to use the return statement.
Lastly, there's the case when I have a list or a dictionary inside of a function. I'll make something up just as an example:
def my_function():
my_dict = {"Key1":Value1,
"Key2":Value2,
"Key3":Value3,
"Key4":Value4,}
How would I access the key/value and be able to change them from outside of the function? ie: If I had a program that let you input/output player stats or a character attributes in a video game.
I understand bits and pieces of this, it just confuses me when they have different functions calling on each other.
Also, since this was my first encounter with custom functions: is this really ambitious to pursue, and could that be the reason for all of my confusion? This is the most complex program I have seen yet.
Functions in Python can be both a regular procedure and a function with a return value. In fact, every Python function returns a value, which might be None.
If a return statement is not present, your function executes completely and exits normally, following the code flow, yielding None as its return value.
def foo():
pass
>>> foo() == None
True
If you have a return statement inside your function, the return value will be the value of the expression following it. For example, you may write return None and you'll be explicitly returning None. You can also have a bare return, and there you'll be implicitly returning None. Or you can write return 3 and you'll be returning the value 3. This may grow in complexity.
def foo():
print('hello')
return
print('world')
>>> foo()
hello
def add(a,b):
return a + b
>>> add(3, 4)
7
If you want a dictionary (or any object) you created inside a function, just return it:
def my_function():
my_dict = {"Key1":Value1,
"Key2":Value2,
"Key3":Value3,
"Key4":Value4,}
return my_dict
>>> d = my_function()
>>> d['Key1']
Value1
Those are the basics of function calling, but there's even more. There are functions that return functions (used as decorators), you can even return multiple values (not really: you'll just be returning a tuple), and a lot of other fun stuff :)
def two_values():
return 3,4
>>> a, b = two_values()
>>> print(a)
3
>>> print(b)
4
Hope this helps!
The primary way to pass information between functions is with arguments and return values. Functions can't see each other's variables. You might think that after
def my_function():
my_dict = {"Key1":Value1,
"Key2":Value2,
"Key3":Value3,
"Key4":Value4,}
my_function()
my_dict would have a value that other functions would be able to see, but it turns out that's a really brittle way to design a language. Every time you call my_function, my_dict would lose its old value, even if you were still using it. Also, you'd have to know all the names used by every function in the system when picking the names to use when writing a new function, and the whole thing would rapidly become unmanageable. Python doesn't work that way; I can't think of any languages that do.
Instead, if a function needs to make information available to its caller, return the thing its caller needs to see:
def my_function():
return {"Key1":"Value1",
"Key2":"Value2",
"Key3":"Value3",
"Key4":"Value4",}
print(my_function()['Key1']) # Prints Value1
Note that a function ends when its execution hits a return statement (even if it's in the middle of a loop); you can't execute one return now, one return later, keep going, and return two things when you hit the end of the function. If you want to do that, keep a list of things you want to return and return the list when you're done.
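For example, a small sketch of that collect-then-return pattern (the function name is made up):

def first_n_squares(n):
    results = []               # collect values instead of returning early
    for i in range(1, n + 1):
        results.append(i * i)
    return results             # one return statement, all the values

>>> first_n_squares(4)
[1, 4, 9, 16]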
You send information into and out of functions with arguments and return values, respectively. This function, for example:
def square(number):
"""Return the square of a number."""
return number * number
... receives information through the number argument, and sends information back with the return ... statement. You can use it like this:
>>> x = square(7)
>>> print(x)
49
As you can see, we passed the value 7 to the function, and it returned the value 49 (which we stored in the variable x).
Now, lets say we have another function:
def halve(number):
"""Return half of a number."""
return number / 2.0
We can send information between two functions in a couple of different ways.
Use a temporary variable:
>>> tmp = square(6)
>>> halve(tmp)
18.0
use the first function directly as an argument to the second:
>>> halve(square(8))
32.0
Which of those you use will depend partly on personal taste, and partly on how complicated the thing you're trying to do is.
Even though they have the same name, the number variables inside square() and halve() are completely separate from each other, and they're invisible outside those functions:
>>> number
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'number' is not defined
So, it's actually impossible to "see" the variable my_dict in your example function. What you would normally do is something like this:
def my_function(my_dict):
# do something with my_dict
return my_dict
... and define my_dict outside the function.
(It's actually a little bit more complicated than that - dict objects are mutable (which just means they can change), so often you don't actually need to return them. However, for the time being it's probably best to get used to returning everything, just to be safe).
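A quick illustration of that mutability point (the names here are made up):

def add_win(stats):
    stats["wins"] = stats.get("wins", 0) + 1  # modifies the caller's dict in place

>>> player = {"name": "Ada"}
>>> add_win(player)
>>> player
{'name': 'Ada', 'wins': 1}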
I'm trying to make a function that does different things when called on different argument types. Specifically, one of the functions should have the signature
def myFunc(string, string):
and the other should have the signature
def myFunc(list):
How can I do this, given that I'm not allowed to specify whether the arguments are strings or lists?
Python does not support overloading, not even by argument count. You need to do:
def foo(string_or_list, string = None):
if isinstance(string_or_list, list):
...
else:
...
which is pretty silly, or just rethink your design to not have to overload.
There is a recipe at http://code.activestate.com/recipes/577065-type-checking-function-overloading-decorator/ which does what you want;
basically, you wrap each version of your function with @takes and @returns type declarations; when you call the function, it tries each version until it finds one that does not throw a type error.
Edit: here is a cut-down version; it's probably not a good thing to do, but if you gotta, here's how:
from collections import defaultdict
def overloaded_function(overloads):
"""
Accepts a sequence of ((arg_types,), fn) pairs
Creates a dispatcher function
"""
dispatch_table = defaultdict(list)
for arg_types,fn in overloads:
dispatch_table[len(arg_types)].append([list(arg_types),fn])
def dispatch(*args):
for arg_types,fn in dispatch_table[len(args)]:
if all(isinstance(arg, arg_type) for arg,arg_type in zip(args,arg_types)):
return fn(*args)
raise TypeError("could not find an overloaded function to match this argument list")
return dispatch
and here's how it works:
def myfn_string_string(s1, s2):
print("Got the strings {} and {}".format(s1, s2))
def myfn_list(lst):
print("Got the list {}".format(lst))
myfn = overloaded_function([
    ((str, str), myfn_string_string),  # on Python 2, use basestring instead of str
((list,), myfn_list)
])
myfn("abcd", "efg") # prints "Got the strings abcd and efg"
myfn(["abc", "def"]) # prints "Got the list ['abc', 'def']"
myfn(123) # raises TypeError
*args is probably the better way, but you could do something like:
def myFunc(arg1, arg2=None):
if arg2 is not None:
#do this
else:
#do that
But that's probably a terrible way of doing it.
Not a perfect solution, but if the second string argument will never legitimately be None, you could try:
def myFunc( firstArg, secondArg = None ):
if secondArg is None:
# only one arg provided, try treating firstArg as a list
else:
# two args provided, try treating them both as strings
Define it as taking variable arguments:
def myFunc(*args):
Then you can check the amount and type of the arguments via len and isinstance, and route the call to the appropriate case-specific function.
It may make for clearer code, however, if you used optional named arguments. It would be better still if you didn't use overloading at all; it's kinda not Python's way.
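A minimal sketch of that dispatch idea (the helper names here are made up):

def _from_strings(s1, s2):
    print("two strings:", s1, s2)

def _from_list(lst):
    print("one list:", lst)

def myFunc(*args):
    # route on argument count and type
    if len(args) == 2 and all(isinstance(a, str) for a in args):
        return _from_strings(*args)
    if len(args) == 1 and isinstance(args[0], list):
        return _from_list(args[0])
    raise TypeError("unsupported argument combination")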
You can't; for instance, a class's instance methods can be inserted at run time, so there is no fixed set of signatures to dispatch on.
If you had multiple __init__ signatures for a class, for instance, you'd be better off with multiple @classmethods, such as from_strings or from_sequence.
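A minimal sketch of that pattern, with a made-up Point class:

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    @classmethod
    def from_strings(cls, x_str, y_str):
        # alternate constructor: parse two strings
        return cls(float(x_str), float(y_str))

    @classmethod
    def from_sequence(cls, seq):
        # alternate constructor: unpack a list or tuple
        return cls(seq[0], seq[1])

p1 = Point.from_strings("1.5", "2.5")
p2 = Point.from_sequence([3, 4])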