Make function have ellipsis for arguments in help() function - python

If you type help(vars), the following is produced:
vars(...)
vars([object]) -> dictionary
Without arguments, equivalent to locals().
With an argument, equivalent to object.__dict__.
When I do the following:
def func(x, y): pass
help(func)
it displays this:
func(x, y)
How can I change it so that it shows up with ... between the parentheses like the built-in function vars()? (That is, func(...))
Edit: It has been suggested to use a docstring, but that won't do what I want. Here is an example:
def func(x, y):
"""func(...) -> None"""
help(func)
result:
func(x, y)
func(...) -> None
You see, x, y is still being displayed instead of ...

You have (at least) two alternatives to achieve what you want.
The best alternative would be to override the __str__ method of the inspect.Signature class. Since you should not modify the standard library class in place, extend it as follows:
import inspect

class MySignature(inspect.Signature):
    def __str__(self):
        return '(...)'

Then, after defining your function, execute:

func_signature = inspect.signature(func)
func.__signature__ = MySignature(func_signature.parameters.values(),
                                 return_annotation=func_signature.return_annotation)
which would then return the following for help(func):
Help on function func in module __main__:
func(...)
(END)
With this approach inspect.signature still works:
In [1]: inspect.signature(func)
Out[1]: <MySignature (...)>
Alternatively, if you don't really care about being able to properly introspect your function (and probably some other use cases), you can set your function's __signature__ to an object which is not a Signature instance:
def func(x, y):
    pass

func.__signature__ = object()
help(func)
generates the result:
Help on function func in module __main__:
func(...)
(END)
But now inspect.signature(func) will raise TypeError: unexpected object <object object at 0x10182e200> in __signature__ attribute.
Note: this last version is quite hacky and I would not recommend it.
For more info on these two techniques and how the signature works see PEP 0362.
Update:
For python 2.7 you can do the following (probably better using a mock framework):
In [1]: import inspect

In [2]: def myformatargspec(*args, **kwargs):
   ...:     return '(...)'
   ...:

In [3]: def func(x, y):
   ...:     pass
   ...:

In [6]: inspect.formatargspec = myformatargspec

In [7]: help(func)
Help on function func in module __main__:

func(...)
(END)
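For illustration only, here is a sketch of the same idea using a mock framework, assuming the third-party mock package is installed on Python 2.7 (unittest.mock on Python 3); the patch is undone automatically when the block exits:
# Sketch: temporarily patch inspect.formatargspec, which pydoc uses on Python 2.7.
import inspect
import mock

def func(x, y):
    pass

with mock.patch.object(inspect, 'formatargspec', lambda *a, **kw: '(...)'):
    help(func)   # shows func(...) while the patch is active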

Related

Regarding Python methods in same class [duplicate]

I am trying to implement method overloading in Python:
class A:
    def stackoverflow(self):
        print 'first method'
    def stackoverflow(self, i):
        print 'second method', i

ob = A()
ob.stackoverflow(2)
but the output is second method 2; similarly:
class A:
    def stackoverflow(self):
        print 'first method'
    def stackoverflow(self, i):
        print 'second method', i

ob = A()
ob.stackoverflow()
gives
Traceback (most recent call last):
  File "my.py", line 9, in <module>
    ob.stackoverflow()
TypeError: stackoverflow() takes exactly 2 arguments (1 given)
How do I make this work?
It's method overloading, not method overriding. And in Python, you historically do it all in one function:
class A:
    def stackoverflow(self, i='some_default_value'):
        print('only method')

ob = A()
ob.stackoverflow(2)
ob.stackoverflow()
See the Default Argument Values section of the Python tutorial. See "Least Astonishment" and the Mutable Default Argument for a common mistake to avoid.
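To make that common mistake concrete, here is a minimal sketch (standard Python behaviour, not taken from the linked pages): a mutable default value is created once, at definition time, and shared across calls.
def append_to(item, target=[]):   # pitfall: the default list is shared between calls
    target.append(item)
    return target

print(append_to(1))  # [1]
print(append_to(2))  # [1, 2]  <- the same list again, usually a surprise

def append_to_fixed(item, target=None):  # the usual fix
    if target is None:
        target = []
    target.append(item)
    return target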
See PEP 443 for information about the single dispatch generic functions added in Python 3.4:
>>> from functools import singledispatch
>>> @singledispatch
... def fun(arg, verbose=False):
...     if verbose:
...         print("Let me just say,", end=" ")
...     print(arg)
...
>>> @fun.register(int)
... def _(arg, verbose=False):
...     if verbose:
...         print("Strength in numbers, eh?", end=" ")
...     print(arg)
...
>>> @fun.register(list)
... def _(arg, verbose=False):
...     if verbose:
...         print("Enumerate this:")
...     for i, elem in enumerate(arg):
...         print(i, elem)
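Calling fun then dispatches on the type of the first argument; the following usage mirrors the example in the functools documentation:
>>> fun("Hello, world.")
Hello, world.
>>> fun("test.", verbose=True)
Let me just say, test.
>>> fun(42, verbose=True)
Strength in numbers, eh? 42
>>> fun(['spam', 'spam', 'eggs', 'spam'], verbose=True)
Enumerate this:
0 spam
1 spam
2 eggs
3 spam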
You can also use pythonlangutil:
from pythonlangutil.overload import Overload, signature

class A:
    @Overload
    @signature()
    def stackoverflow(self):
        print('first method')

    @stackoverflow.overload
    @signature("int")
    def stackoverflow(self, i):
        print('second method', i)
While agf was right with the answer in the past, pre-3.4, now with PEP-3124 we got our syntactic sugar.
See the typing documentation for details on the @overload decorator, but note that this is really just syntactic sugar and IMHO this is all people have been arguing about ever since.
Personally, I agree that having multiple functions with different signatures makes it more readable than having a single function with 20+ arguments all set to a default value (None most of the time) and then having to fiddle around using endless if, elif, else chains to find out what the caller actually wants our function to do with the provided set of arguments. This was long overdue following the Python Zen:
Beautiful is better than ugly.
and arguably also
Simple is better than complex.
Straight from the official Python documentation linked above:
from typing import Tuple, overload

@overload
def process(response: None) -> None:
    ...
@overload
def process(response: int) -> Tuple[int, str]:
    ...
@overload
def process(response: bytes) -> str:
    ...
def process(response):
    <actual implementation>
EDIT: for anyone wondering why this example is not working as you'd expect if coming from other languages, I'd suggest taking a look at this discussion. The @overload-decorated functions are not supposed to have any actual implementation. This is not obvious from the example in the Python documentation.
In Python, you don't do things that way. When people do that in languages like Java, they generally want a default value (if they don't, they generally want a method with a different name). So, in Python, you can have default values.
class A(object):  # Remember the ``object`` bit when working in Python 2.x
    def stackoverflow(self, i=None):
        if i is None:
            print 'first form'
        else:
            print 'second form'
As you can see, you can use this to trigger separate behaviour rather than merely having a default value.
>>> ob = A()
>>> ob.stackoverflow()
first form
>>> ob.stackoverflow(2)
second form
You can't, never need to and don't really want to.
In Python, everything is an object. Classes are things, so they are objects. So are methods.
There is an object called A which is a class. It has an attribute called stackoverflow. It can only have one such attribute.
When you write def stackoverflow(...): ..., what happens is that you create an object which is the method, and assign it to the stackoverflow attribute of A. If you write two definitions, the second one replaces the first, the same way that assignment always behaves.
You furthermore do not want to write code that does the wilder of the sorts of things that overloading is sometimes used for. That's not how the language works.
Instead of trying to define a separate function for each type of thing you could be given (which makes little sense since you don't specify types for function parameters anyway), stop worrying about what things are and start thinking about what they can do.
You not only can't write a separate one to handle a tuple vs. a list, but also don't want or need to.
All you do is take advantage of the fact that they are both, for example, iterable (i.e. you can write for element in container:). (The fact that they aren't directly related by inheritance is irrelevant.)
This answer was written using Python 3.2.1.
def overload(*functions):
    return lambda *args, **kwargs: functions[len(args)](*args, **kwargs)
How it works:
overload takes any number of callables, stores them in the tuple functions, and then returns a lambda.
The lambda takes any number of arguments,
then returns the result of calling the function stored in functions[number_of_unnamed_args_passed], passing along the arguments given to the lambda.
Usage:
class A:
    stackoverflow = overload(
        None,  # there is always a self argument, so index 0 should never get called
        lambda self: print('First method'),
        lambda self, i: print('Second method', i),
    )
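A short usage sketch for the class above, just to make the dispatch-by-argument-count explicit (Python 3, since print is used inside a lambda):
a = A()
a.stackoverflow()     # 1 positional argument (self)  -> prints "First method"
a.stackoverflow(42)   # 2 positional arguments        -> prints "Second method 42"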
I think the word you're looking for is "overloading". There isn't any method overloading in Python. You can however use default arguments, as follows.
def stackoverflow(self, i=None):
    if i != None:
        print 'second method', i
    else:
        print 'first method'
When you pass it an argument, it will follow the logic of the first condition and execute the first print statement. When you pass it no arguments, it will go into the else condition and execute the second print statement.
This answer was written for Python 2.7.
In Python, method overloading is not possible; if you really want to access the same function with different features, I suggest you go for method overriding.
class Base():  # Base class
    '''def add(self, a, b):
        s = a + b
        print s'''
    def add(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c
        sum = a + b + c
        print sum

class Derived(Base):  # Derived class
    def add(self, a, b):  # overriding method
        sum = a + b
        print sum

add_fun_1 = Base()      # instance creation for Base class
add_fun_2 = Derived()   # instance creation for Derived class
add_fun_1.add(4, 2, 5)  # function with 3 arguments
add_fun_2.add(4, 2)     # function with 2 arguments
In Python, overloading is not an applied concept. However, if you are trying to create a case where, for instance, you want one initializer to be performed if passed an argument of type foo and another initializer for an argument of type bar then, since everything in Python is handled as object, you can check the name of the passed object's class type and write conditional handling based on that.
class A:
    def __init__(self, arg):
        # Get the argument's class type as a string
        argClass = arg.__class__.__name__
        if argClass == 'foo':
            print 'Arg is of type "foo"'
            ...
        elif argClass == 'bar':
            print 'Arg is of type "bar"'
            ...
        else:
            print 'Arg is of a different type'
            ...
This concept can be applied to multiple different scenarios through different methods as needed.
In Python, you'd do this with a default argument.
class A:
    def stackoverflow(self, i=None):
        if i == None:
            print 'first method'
        else:
            print 'second method', i
Python does not support method overloading like Java or C++ does. We may define the methods multiple times, but only the most recently defined method is usable.
# First sum method.
# Takes two arguments and prints their sum
def sum(a, b):
    s = a + b
    print(s)

# Second sum method
# Takes three arguments and prints their sum
def sum(a, b, c):
    s = a + b + c
    print(s)

# Uncommenting the line below shows an error
# sum(4, 5)

# This line will call the second sum method
sum(4, 5, 5)
We need to provide optional arguments or *args in order to provide a different number of arguments on calling.
Courtesy Python | Method Overloading
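To make that last point concrete, here is a minimal *args sketch (my illustration, not from the linked article) that accepts either two or three numbers:
def sum_all(*args):
    # Accepts any number of positional arguments.
    total = 0
    for value in args:
        total += value
    print(total)

sum_all(4, 5)     # 9
sum_all(4, 5, 5)  # 14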
I just came across overloading.py (function overloading for Python 3) for anybody who may be interested.
From the linked repository's README file:
overloading is a module that provides function dispatching based on
the types and number of runtime arguments.
When an overloaded function is invoked, the dispatcher compares the
supplied arguments to available function signatures and calls the
implementation that provides the most accurate match.
Features
Function validation upon registration and detailed resolution rules
guarantee a unique, well-defined outcome at runtime. Implements
function resolution caching for great performance. Supports optional
parameters (default values) in function signatures. Evaluates both
positional and keyword arguments when resolving the best match.
Supports fallback functions and execution of shared code. Supports
argument polymorphism. Supports classes and inheritance, including
classmethods and staticmethods.
Python 3.x includes the standard typing library, which allows for method overloading with the use of the @overload decorator. Unfortunately, this is only to make the code more readable, as the @overload-decorated methods still need to be followed by a non-decorated method that handles the different arguments.
More can be found here, but for your example:
from typing import overload
from typing import Any, Optional

class A(object):
    @overload
    def stackoverflow(self) -> None:
        print('first method')
    @overload
    def stackoverflow(self, i: Any) -> None:
        print('second method', i)
    def stackoverflow(self, i: Optional[Any] = None) -> None:
        if not i:
            print('first method')
        else:
            print('second method', i)

ob = A()
ob.stackoverflow(2)
Python added the @overload decorator with PEP-3124 to provide syntactic sugar for overloading via type inspection - instead of just working with overwriting.
Code example of overloading via @overload from PEP-3124:
from overloading import overload
from collections import Iterable

def flatten(ob):
    """Flatten an object to its component iterables"""
    yield ob

@overload
def flatten(ob: Iterable):
    for o in ob:
        for ob in flatten(o):
            yield ob

@overload
def flatten(ob: basestring):
    yield ob

is transformed by the @overload decorator to:

def flatten(ob):
    if isinstance(ob, basestring) or not isinstance(ob, Iterable):
        yield ob
    else:
        for o in ob:
            for ob in flatten(o):
                yield ob
In the MathMethod.py file:

from multipledispatch import dispatch

@dispatch(int, int)
def Add(a, b):
    return a + b

@dispatch(int, int, int)
def Add(a, b, c):
    return a + b + c

@dispatch(int, int, int, int)
def Add(a, b, c, d):
    return a + b + c + d

In the Main.py file:

import MathMethod as MM
print(MM.Add(200, 1000, 1000, 200))
We can overload the method by using multipledispatch.
There are some libraries that make this easy:
functools - if you only need the first argument, use @singledispatch
plum-dispatch - feature-rich method/function overloading.
multipledispatch - alternative to plum; fewer features but lightweight.
Python 3.5 added the typing module. This includes an overload decorator.
This decorator's intended purpose is to help type checkers. Functionally it's just duck typing.
from typing import Optional, overload

@overload
def foo(index: int) -> str:
    ...
@overload
def foo(name: str) -> str:
    ...
@overload
def foo(name: str, index: int) -> str:
    ...
def foo(name: Optional[str] = None, index: Optional[int] = None) -> str:
    return f"name: {name}, index: {index}"

foo(1)
foo("bar", 1)
foo("bar", None)
This surfaces the corresponding overload signatures in VS Code's type hints.
And while this can help, note that it adds lots of "weird" new syntax. Its purpose - purely type hints - is not immediately obvious.
Going with a Union of types is usually a better option.
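For comparison, a minimal sketch of the Union-based alternative for the same foo (my illustration, not from the answer above):
from typing import Optional, Union

def foo(name_or_index: Union[str, int], index: Optional[int] = None) -> str:
    return f"name_or_index: {name_or_index}, index: {index}"

foo(1)
foo("bar", 1)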

How to print the body of imported method or function in python? [duplicate]

Suppose I have a Python function as defined below:
def foo(arg1,arg2):
    #do something with args
    a = arg1 + arg2
    return a
I can get the name of the function using foo.func_name. How can I programmatically get its source code, as I typed above?
If the function is from a source file available on the filesystem, then inspect.getsource(foo) might be of help:
If foo is defined as:
def foo(arg1,arg2):
    #do something with args
    a = arg1 + arg2
    return a
Then:
import inspect
lines = inspect.getsource(foo)
print(lines)
Returns:
def foo(arg1,arg2):
    #do something with args
    a = arg1 + arg2
    return a
But I believe that if the function is compiled from a string, stream or imported from a compiled file, then you cannot retrieve its source code.
The inspect module has methods for retrieving source code from python objects. Seemingly it only works if the source is located in a file though. If you had that I guess you wouldn't need to get the source from the object.
The following tests inspect.getsource(foo) using Python 3.6:
import inspect
def foo(arg1,arg2):
#do something with args
a = arg1 + arg2
return a
source_foo = inspect.getsource(foo) # foo is normal function
print(source_foo)
source_max = inspect.getsource(max) # max is a built-in function
print(source_max)
This first prints:
def foo(arg1,arg2):
    #do something with args
    a = arg1 + arg2
    return a
Then fails on inspect.getsource(max) with the following error:
TypeError: <built-in function max> is not a module, class, method, function, traceback, frame, or code object
Just use foo?? or ??foo.
If you are using IPython, then you need to type foo?? or ??foo to see the complete source code. To see only the docstring in the function, use foo? or ?foo. This works in Jupyter notebook as well.
In [19]: foo??
Signature: foo(arg1, arg2)
Source:
def foo(arg1,arg2):
    #do something with args
    a = arg1 + arg2
    return a
File:      ~/Desktop/<ipython-input-18-3174e3126506>
Type:      function
dis is your friend if the source code is not available:
>>> import dis
>>> def foo(arg1,arg2):
...     #do something with args
...     a = arg1 + arg2
...     return a
...
>>> dis.dis(foo)
  3           0 LOAD_FAST                0 (arg1)
              3 LOAD_FAST                1 (arg2)
              6 BINARY_ADD
              7 STORE_FAST               2 (a)

  4          10 LOAD_FAST                2 (a)
             13 RETURN_VALUE
While I'd generally agree that inspect is a good answer, I'd disagree that you can't get the source code of objects defined in the interpreter. If you use dill.source.getsource from dill, you can get the source of functions and lambdas, even if they are defined interactively.
It can also get the code for bound or unbound class methods and functions defined in curries... however, you might not be able to compile that code without the enclosing object's code.
>>> from dill.source import getsource
>>>
>>> def add(x,y):
...     return x+y
...
>>> squared = lambda x:x**2
>>>
>>> print getsource(add)
def add(x,y):
    return x+y

>>> print getsource(squared)
squared = lambda x:x**2

>>>
>>> class Foo(object):
...     def bar(self, x):
...         return x*x+x
...
>>> f = Foo()
>>>
>>> print getsource(f.bar)
def bar(self, x):
    return x*x+x

>>>
To expand on runeh's answer:
>>> def foo(a):
...     x = 2
...     return x + a
...
>>> import inspect
>>> inspect.getsource(foo)
u'def foo(a):\n    x = 2\n    return x + a\n'
>>> print inspect.getsource(foo)
def foo(a):
    x = 2
    return x + a
EDIT: As pointed out by @0sh, this example works using IPython but not plain Python. It should be fine in both, however, when importing code from source files.
Since this post is marked as the duplicate of this other post, I answer here for the "lambda" case, although the OP is not about lambdas.
So, for lambda functions that are not defined in their own lines: in addition to marko.ristin's answer, you may wish to use mini-lambda or use SymPy as suggested in this answer.
mini-lambda is lighter and supports any kind of operation, but works only for a single variable
SymPy is heavier but much more equipped with mathematical/calculus operations. In particular it can simplify your expressions. It also supports several variables in the same expression.
Here is how you can do it using mini-lambda:
from mini_lambda import x, is_mini_lambda_expr
import inspect

def get_source_code_str(f):
    if is_mini_lambda_expr(f):
        return f.to_string()
    else:
        return inspect.getsource(f)

# test it
def foo(arg1, arg2):
    # do something with args
    a = arg1 + arg2
    return a

print(get_source_code_str(foo))
print(get_source_code_str(x ** 2))
It correctly yields
def foo(arg1, arg2):
    # do something with args
    a = arg1 + arg2
    return a
x ** 2
See mini-lambda documentation for details. I'm the author by the way ;)
You can use the inspect module to get the full source code. Use its getsource() method. For example:
import inspect

def get_my_code():
    x = "abcd"
    return x

print(inspect.getsource(get_my_code))
You can check out more options at the link below:
retrieve your python code
To summarize:
import inspect
print("".join(inspect.getsourcelines(foo)[0]))
Please mind that the accepted answers work only if the lambda is given on a separate line. If you pass it in as an argument to a function and would like to retrieve the code of the lambda as an object, the problem gets a bit tricky since inspect will give you the whole line.
For example, consider a file test.py:
import inspect

def main():
    x, f = 3, lambda a: a + 1
    print(inspect.getsource(f))

if __name__ == "__main__":
    main()

Executing it gives you (mind the indentation!):

    x, f = 3, lambda a: a + 1
To retrieve the source code of the lambda, your best bet, in my opinion, is to re-parse the whole source file (by using f.__code__.co_filename) and match the lambda AST node by the line number and its context.
We had to do precisely that in our design-by-contract library icontract since we had to parse the lambda functions we pass in as arguments to decorators. It is too much code to paste here, so have a look at the implementation of this function.
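For a rough idea of the approach, here is a simplified sketch with a hypothetical lambda_source helper; it is not icontract's actual implementation, needs Python 3.8+ for ast.get_source_segment, and still cannot disambiguate two lambdas starting on the same line:
import ast

def lambda_source(fn):
    # Re-parse the defining file and return the source segment of the first
    # lambda node that starts on the function's first line number.
    filename = fn.__code__.co_filename
    lineno = fn.__code__.co_firstlineno
    with open(filename) as f:
        source = f.read()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Lambda) and node.lineno == lineno:
            return ast.get_source_segment(source, node)
    return None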
If you're strictly defining the function yourself and it's a relatively short definition, a solution without dependencies would be to define the function in a string and assign the eval() of the expression to your function.
E.g.
funcstring = 'lambda x: x> 5'
func = eval(funcstring)
then optionally to attach the original code to the function:
func.source = funcstring
Rafał Dowgird's answer states:
I believe that if the function is compiled from a string, stream or imported from a compiled file, then you cannot retrieve its source code.
However, it is possible to retrieve the source code of a function compiled from a string, provided that the compiling code also added an entry to the linecache.cache dict:
import linecache
import inspect

script = '''
def add_nums(a, b):
    return a + b
'''

bytecode = compile(script, 'unique_filename', 'exec')
tmp = {}
eval(bytecode, {}, tmp)
add_nums = tmp["add_nums"]

linecache.cache['unique_filename'] = (
    len(script),
    None,
    script.splitlines(True),
    'unique_filename',
)

print(inspect.getsource(add_nums))
# prints:
# """
# def add_nums(a, b):
#     return a + b
# """
This is how the attrs library creates various methods for classes automatically, given a set of attributes that the class expects to be initialized with. See their source code here. As the source explains, this is a feature primarily intended to enable debuggers such as PDB to step through the code.
I believe that variable names aren't stored in pyc/pyd/pyo files, so you can not retrieve the exact code lines if you don't have source files.

Call a function without ()

Can I somehow call a function without the ()? Maybe abusing the magic methods such as __call__() somehow?
I'd like to be able to do something similar to
from IPython import embed as qq
but call embed() only via qq rather than qq()
This is more out of curiosity, and as a learning exercise for python, rather than practical purposes.
If you are using the REPL (the Python shell), then you can hack your way around this, because the REPL will call repr() on objects for you (which in turn invokes their __repr__ method):
from IPython import embed

class WrappedFunctionCall(object):
    def __init__(self, fn):
        self.fn = fn

    def __repr__(self):
        self.fn()
        return ""  # `__repr__` must return a string

qq = WrappedFunctionCall(embed)
# Typing "qq" will invoke embed now and load IPython.
But really, you should not be doing this!
And of course, it won't work outside of the REPL, because there won't be anything to call __repr__ in that case. Obviously, passing arguments isn't "supported" either.
__call__ will be invoked only if the function is invoked with (). If the function is in a class, then you can use the @property decorator to do something like this:
import math

class Circle(object):
    def __init__(self, radius):
        self.radius = radius

    @property
    def area(self):
        return math.pi * (self.radius ** 2)

print(Circle(5).area)
# 78.53981633974483
Read more about getter and setter here
If you want to learn, play around with Python.
In [1]: def foo():
   ...:     pass
   ...:

In [2]: foo
Out[2]: <function __main__.foo>

In [3]: foo()

In [4]: bar = foo

In [5]: bar
Out[5]: <function __main__.foo>

In [6]: bar()

As you see, foo will not call the function, it will return it. And that is a good thing, because you can pass it as an argument and assign it, for example bar = foo.
In pure Python, the only way I can think of is to use an object and a property:
>>> class Wtf(object):
...     @property
...     def yadda(self):
...         print "Yadda"
...
>>> w = Wtf()
>>> w.yadda
Yadda
>>>
Else you might want to check IPython's doc on how to define your own custom "magic" commands: http://ipython.org/ipython-doc/dev/config/custommagics.html
You can call the function foo without using () (on that function):
def call_function(fun_name, *args):
    return fun_name(*args)

def foo(a, b):
    return a + b

print call_function(foo, 1, 2)
# Prints 3
Note that this answer isn't entirely serious, but it does contain a snippet of interesting Python code.

How to keep help strings the same when applying decorators?

How can I keep help strings in functions to be visible after applying a decorator?
Right now the doc string is (partially) replaced with that of the inner function of the decorator.
def deco(fn):
    def x(*args, **kwargs):
        return fn(*args, **kwargs)
    x.func_doc = fn.func_doc
    x.func_name = fn.func_name
    return x

@deco
def y(a, b):
    """This is Y"""
    pass

def z(c, d):
    """This is Z"""
    pass

help(y)  # 1
help(z)  # 2
In the Y function, required arguments aren't shown in the help. The user may assume it takes any arguments, while actually it doesn't.
y(*args, **kwargs)    <= y(a, b) is desired
    This is Y

z(c, d)
    This is Z
I use help() and dir() a lot, since it's faster than pdf manuals, and want to make reliable document strings for my library and tools, but this is an obstacle.
Give the decorator module a peek. I believe it does exactly what you want.
In [1]: from decorator import decorator
In [2]: @decorator
   ...: def say_hello(f, *args, **kwargs):
   ...:     print "Hello!"
   ...:     return f(*args, **kwargs)
   ...:

In [3]: @say_hello
   ...: def double(x):
   ...:     return 2*x
   ...:
and info says "double(x)" in it.
What you're requesting is very hard to do "properly", because help gets the function signature from inspect.getargspec which in turn gets it from introspection which cannot directly be fooled -- to do it "properly" would mean generating a new function object on the fly (instead of a simple wrapper function) with the right argument names and numbers (and default values). Extremely hard, advanced, black-magic bytecode hacking required, in other words.
I think it may be easier to do it by monkeypatching (never a pleasant prospect, but sometimes the only way to perform customization tasks that are otherwise so difficult as to prove almost impossible, like the one you require) -- replace the real inspect.getargspec with your own lookalike function which uses a look-aside table (mapping the wrapper functions you generate to the wrapped functions' argspecs and otherwise delegating to the real thing).
import functools
import inspect
realgas = inspect.getargspec
lookaside = dict()
def fakegas(f):
    if f in lookaside:
        return lookaside[f]
    return realgas(f)

inspect.getargspec = fakegas

def deco(fn):
    @functools.wraps(fn)
    def x(*args, **kwargs):
        return fn(*args, **kwargs)
    lookaside[x] = realgas(fn)
    return x

@deco
def x(a, b=23):
    """Some doc for x."""
    return a + b

help(x)
This prints, as required:
Help on function x in module __main__:
x(a, b=23)
Some doc for x.
(END)

Set function signature in Python

Suppose I have a generic function f. I want to programmatically create a function f2 that behaves the same as f, but has a customized signature.
More detail
Given a list l and a dictionary d, I want to be able to:
Set the non-keyword arguments of f2 to the strings in l
Set the keyword arguments of f2 to the keys in d and the default values to the values of d
i.e. suppose we have
l = ["x", "y"]
d = {"opt": None}

def f(*args, **kwargs):
    # My code
Then I would want a function with signature:
def f2(x, y, opt=None):
    # My code
A specific use case
This is just a simplified version of my specific use case. I am giving this as an example only.
My actual use case (simplified) is as follows. We have a generic initiation function:
def generic_init(self, *args, **kwargs):
    """Function to initiate a generic object"""
    for name, arg in zip(self.__init_args__, args):
        setattr(self, name, arg)
    for name, default in self.__init_kw_args__.items():
        if name in kwargs:
            setattr(self, name, kwargs[name])
        else:
            setattr(self, name, default)
We want to use this function in a number of classes. In particular, we want to create a function __init__ that behaves like generic_init, but has the signature defined by some class variables at creation time:
class my_class:
    __init_args__ = ["x", "y"]
    __kw_init_args__ = {"my_opt": None}

__init__ = create_initiation_function(my_class, generic_init)
setattr(my_class, "__init__", __init__)
We want create_initiation_function to create a new function with the signature defined using __init_args__ and __kw_init_args__. Is it possible to write create_initiation_function?
Please note:
If I just wanted to improve the help, I could set __doc__.
We want to set the function signature on creation. After that, it doesn't need to be changed.
Instead of creating a function like generic_init but with a different signature, we could create a new function with the desired signature that just calls generic_init.
We want to define create_initiation_function. We don't want to manually specify the new function!
Related
Preserving signatures of decorated functions: This is how to preserve a signature when decorating a function. We need to be able to set the signature to an arbitrary value
From PEP-0362, there actually does appear to be a way to set the signature in py3.3+, using the fn.__signature__ attribute:
from inspect import signature
from functools import wraps
def shared_vars(*shared_args):
    """Decorator factory that defines shared variables that are
    passed to every invocation of the function"""
    def decorator(f):
        @wraps(f)
        def wrapper(*args, **kwargs):
            full_args = shared_args + args
            return f(*full_args, **kwargs)
        # Override signature
        sig = signature(f)
        sig = sig.replace(parameters=tuple(sig.parameters.values())[1:])
        wrapper.__signature__ = sig
        return wrapper
    return decorator

Then:

>>> @shared_vars({"myvar": "myval"})
... def example(_state, a, b, c):
...     return _state, a, b, c
>>> example(1,2,3)
({'myvar': 'myval'}, 1, 2, 3)
>>> str(signature(example))
'(a, b, c)'
Note: the PEP is not exactly right; Signature.replace moved the params from a positional arg to a kw-only arg.
For your use case, having a docstring in the class/function should work -- that will show up in help() okay, and can be set programmatically (func.__doc__ = "stuff").
I can't see any way of setting the actual signature. I would have thought the functools module would have done it if it was doable, but it doesn't, at least in py2.5 and py2.6.
You can also raise a TypeError exception if you get bad input.
Hmm, if you don't mind being truly vile, you can use compile()/eval() to do it. If your desired signature is specified by arglist=["foo","bar","baz"], and your actual function is f(*args, **kwargs), you can manage:
argstr = ", ".join(arglist)
fakefunc = "def func(%s):\n return real_func(%s)\n" % (argstr, argstr)
fakefunc_code = compile(fakefunc, "fakesource", "exec")
fakeglobals = {}
eval(fakefunc_code, {"real_func": f}, fakeglobals)
f_with_good_sig = fakeglobals["func"]
help(f) # f(*args, **kwargs)
help(f_with_good_sig) # func(foo, bar, baz)
Changing the docstring and func_name should get you a complete solution. But, uh, eww...
I wrote a package named forge that solves this exact problem for Python 3.5+:
With your current code looking like this:
l=["x", "y"]
d={"opt":None}
def f(*args, **kwargs):
#My code
And your desired code looking like this:
def f2(x, y, opt=None):
    # My code
Here is how you would solve that using forge:
import forge

f2 = forge.sign(
    forge.arg('x'),
    forge.arg('y'),
    forge.arg('opt', default=None),
)(f)
As forge.sign is a wrapper, you could also use it directly:
@forge.sign(
    forge.arg('x'),
    forge.arg('y'),
    forge.arg('opt', default=None),
)
def func(*args, **kwargs):
    # signature becomes: func(x, y, opt=None)
    return (args, kwargs)
assert func(1, 2) == ((), {'x': 1, 'y': 2, 'opt': None})
Have a look at makefun, it was made for that (exposing variants of functions with more or less parameters and accurate signature), and works in python 2 and 3.
Your example would be written like this:
try:  # python 3.3+
    from inspect import signature, Signature, Parameter
except ImportError:
    from funcsigs import signature, Signature, Parameter

from makefun import create_function

def create_initiation_function(cls, gen_init):
    # (1) check which signature we want to create
    params = [Parameter('self', kind=Parameter.POSITIONAL_OR_KEYWORD)]
    for mandatory_arg_name in cls.__init_args__:
        params.append(Parameter(mandatory_arg_name, kind=Parameter.POSITIONAL_OR_KEYWORD))
    for default_arg_name, default_arg_val in cls.__opt_init_args__.items():
        params.append(Parameter(default_arg_name, kind=Parameter.POSITIONAL_OR_KEYWORD, default=default_arg_val))
    sig = Signature(params)

    # (2) create the init function dynamically
    return create_function(sig, gen_init)
# ----- let's use it

def generic_init(self, *args, **kwargs):
    """Function to initiate a generic object"""
    assert len(args) == 0
    for name, val in kwargs.items():
        setattr(self, name, val)

class my_class:
    __init_args__ = ["x", "y"]
    __opt_init_args__ = {"my_opt": None}

my_class.__init__ = create_initiation_function(my_class, generic_init)
and works as expected:
# check
o1 = my_class(1, 2)
assert vars(o1) == {'y': 2, 'x': 1, 'my_opt': None}
o2 = my_class(1, 2, 3)
assert vars(o2) == {'y': 2, 'x': 1, 'my_opt': 3}
o3 = my_class(my_opt='hello', y=3, x=2)
assert vars(o3) == {'y': 3, 'x': 2, 'my_opt': 'hello'}
You can't do this with live code.
That is, you seem to be wanting to take an actual, live function that looks like this:
def f(*args, **kwargs):
    print args[0]
and change it to one like this:
def f(a):
    print a
The reason this can't be done--at least without modifying actual Python bytecode--is because these compile differently.
The former results in a function that receives two parameters: a list and a dict, and the code you're writing operates on that list and dict. The second results in a function that receives one parameter, and which is accessed as a local variable directly. If you changed the function "signature", so to speak, it'd result in a function like this:
def f(a):
    print a[0]
which obviously wouldn't work.
If you want more detail (though it doesn't really help you), a function that takes *args or **kwargs has one or two bits set in f.func_code.co_flags; you can examine this yourself. A function that takes a regular parameter has f.func_code.co_argcount set to 1; the *args version is 0. This is what Python uses to figure out how to set up the function's stack frame when it's called, to check parameters, etc.
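To see those flags for yourself, here is a small sketch using the Python 3 attribute names (__code__ rather than func_code); the function names are made up for the example:
import inspect

def star_args(*args, **kwargs):
    pass

def plain(a):
    pass

print(bool(star_args.__code__.co_flags & inspect.CO_VARARGS))      # True
print(bool(star_args.__code__.co_flags & inspect.CO_VARKEYWORDS))  # True
print(bool(plain.__code__.co_flags & inspect.CO_VARARGS))          # False
print(star_args.__code__.co_argcount, plain.__code__.co_argcount)  # 0 1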
If you want to play around with modifying the function directly--if only to convince yourself that it won't work--see this answer for how to create a code object and live function from an existing one to modify bits of it. (This stuff is documented somewhere, but I can't find it; it's nowhere in the types module docs...)
That said, you can dynamically change the docstring of a function. Just assign to func.__doc__. Be sure to only do this at load time (from the global context or--most likely--a decorator); if you do it later on, tools that load the module to examine docstrings will never see it.
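A minimal sketch of that docstring assignment (names are illustrative only):
def f(a):
    """Original docstring."""
    return a

f.__doc__ = "f(a) -> the argument, unchanged."   # done at load time
help(f)   # the help text now shows the replacement docstring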
Maybe I didn't understand the problem well, but if it's about keeping the same behavior while changing the function signature, then you can do something like:
# define a function
def my_func(name, age):
    print "I am %s and I am %s" % (name, age)

# label the function with a backup name
save_func = my_func

# rewrite the function with a different signature
def my_func(age, name):
    # use the backup name to use the old function and keep the old behavior
    save_func(name, age)

# you can use the new signature
my_func(35, "Bob")

This outputs:
I am Bob and I am 35
We want create_initiation_function to change the signature
Please don't do this.
We want to use this function in a number of classes
Please use ordinary inheritance.
There's no value in having the signature "changed" at run time.
You're creating a maintenance nightmare. No one else will ever bother to figure out what you're doing. They'll simply rip it out and replace it with inheritance.
Do this instead. It's simple and obvious and makes your generic init available in all subclasses in an obvious, simple, Pythonic way.
class Super( object ):
    def __init__( self, *args, **kwargs ):
        # the generic __init__ that we want every subclass to use
        pass

class SomeSubClass( Super ):
    def __init__( self, this, that, **kwdefaults ):
        super( SomeSubClass, self ).__init__( this, that, **kwdefaults )

class AnotherSubClass( Super ):
    def __init__( self, x, y, **kwdefaults ):
        super( AnotherSubClass, self ).__init__( x, y, **kwdefaults )
Edit 1: Answering new question:
You ask how you can create a function with this signature:
def fun(a, b, opt=None):
    pass
The correct way to do that in Python is thus:
def fun(a, b, opt=None):
    pass
Edit 2: Answering explanation:
"Suppose I have a generic function f. I want to programmatically create a function f2 that behaves the same as f, but has a customised signature."
def f(*args, **kw):
    pass
OK, then f2 looks like so:
def f2(a, b, opt=None):
    f(a, b, opt=opt)
Again, the answer to your question is so trivial that you obviously want to know something different from what you are asking. You really do need to stop asking abstract questions and explain your concrete problem.
