I am confused by the following difference. Say I have this class with some use case:
class C:
    def f(self, a, b, c=None):
        print(f"Real f called with {a=}, {b=} and {c=}.")

my_c = C()
my_c.f(1, 2, c=3)  # Output: Real f called with a=1, b=2 and c=3.
I can monkey-patch it for testing purposes like this:
class C:
    def f(self, a, b, c=None):
        print(f"Real f called with {a=}, {b=} and {c=}.")

def f_monkey_patched(self, *args, **kwargs):
    print(f"Patched f called with {args=} and {kwargs=}.")

C.f = f_monkey_patched

my_c = C()
my_c.f(1, 2, c=3)  # Output: Patched f called with args=(1, 2) and kwargs={'c': 3}.
So far so good. But when I patch only one single instance, it somehow consumes the first argument:
class C:
    def f(self, a, b, c=None):
        print(f"Real f called with {a=}, {b=} and {c=}.")

def f_monkey_patched(self, *args, **kwargs):
    print(f"Patched f called with {args=} and {kwargs=}.")

my_c = C()
my_c.f = f_monkey_patched
my_c.f(1, 2, c=3)  # Output: Patched f called with args=(2,) and kwargs={'c': 3}.
Why has the first argument been consumed as self, instead of the instance itself?
Functions in Python are descriptors; when they're attached to a class, but looked up on an instance of the class, the descriptor protocol gets invoked, producing a bound method on your behalf (so my_c.f, where f is defined on the class, is distinct from the actual function f you originally defined, and implicitly passes my_c as self).
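To make that concrete, here is a minimal sketch that hand-invokes the descriptor protocol (reusing C from the question); this is exactly what the lookup my_c.f does for you:

class C:
    def f(self, a, b, c=None):
        print(f"Real f called with {a=}, {b=} and {c=}.")

my_c = C()
bound = C.f.__get__(my_c, C)   # what looking up my_c.f does under the hood
print(bound == my_c.f)         # True: both wrap the same function and instance
print(bound.__self__ is my_c)  # True: the instance is pre-bound as self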
If you want to make a replacement that shadows the class f only for a specific instance, but still passes along the instance as self like you expect, you need to manually bind the instance to the function to create the bound method using the (admittedly terribly documented) types.MethodType:
from types import MethodType  # The class implementing bound methods in Python 3

# ... Definition of C and f_monkey_patched unchanged

my_c = C()
my_c.f = MethodType(f_monkey_patched, my_c)  # Creates a pre-bound method from the
                                             # function and the instance to bind to
Being bound, my_c.f now behaves like a method: it does not accept self from the caller; when called, self is received as the instance that was bound to my_c at the time the MethodType was constructed.
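A quick sanity check of the patched behavior (a sketch reusing C and f_monkey_patched from the question):

from types import MethodType

my_c = C()
my_c.f = MethodType(f_monkey_patched, my_c)
my_c.f(1, 2, c=3)   # Patched f called with args=(1, 2) and kwargs={'c': 3}.

other = C()
other.f(1, 2, c=3)  # Real f called with a=1, b=2 and c=3. (other instances unaffected)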
Update with performance comparisons:
Performance-wise, all the solutions are similar enough that the differences are irrelevant (Kedar's explicit use of the descriptor protocol and my use of MethodType are equivalent, and the fastest, but the margin over functools.partial is so small that it won't matter under the weight of any useful work you're doing):
>>> # ... define C as per OP
>>> def f_monkey_patched(self, a):  # Reduce argument count to reduce unrelated overhead
...     pass
...
>>> from types import MethodType
>>> from functools import partial
>>> partial_c, mtype_c, desc_c = C(), C(), C()
>>> partial_c.f = partial(f_monkey_patched, partial_c)
>>> mtype_c.f = MethodType(f_monkey_patched, mtype_c)
>>> desc_c.f = f_monkey_patched.__get__(desc_c, C)
>>> %%timeit x = partial_c  # Swapping in partial_c, mtype_c or desc_c
... x.f(1)
...
I'm not even going to give exact timing outputs for the IPython %%timeit magic, as they varied across runs, even on a desktop without CPU throttling involved. All I can say for sure is that partial was reliably a little slower, but only by ~1 ns (the other two typically ran in 56-56.5 ns; the partial solution typically took 56.5-57.5 ns), and it took quite a lot of paring away of extraneous overhead (e.g. switching from %timeit, which reads the names from global scope and incurs dict lookups, to caching to a local name with %%timeit, which uses simple array lookups) to make even that difference predictable.
Point is, any of them work, performance-wise. I'd personally recommend either my MethodType approach or Kedar's explicit use of the descriptor protocol (they are identical in end result AFAICT; both produce the same bound-method class), whichever looks prettier to you. Either way the result is an actual bound method, so you can extract .__self__ and .__func__ as you would on any bound method constructed the normal way, whereas partial requires you to switch to .args[0] and .func to get the same information.
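For example, a sketch of that introspection difference, assuming the partial_c, mtype_c and desc_c instances from the timing setup above:

print(mtype_c.f.__self__ is mtype_c)           # True: a real bound method
print(mtype_c.f.__func__ is f_monkey_patched)  # True
print(desc_c.f.__self__ is desc_c)             # True: __get__ builds the same type
print(partial_c.f.func is f_monkey_patched)    # True, but via partial's attributes
print(partial_c.f.args[0] is partial_c)        # True: the instance hides in .args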
You can convert the function to a bound method by calling its __get__ method (since all functions are descriptors as well, they all have this method):
def t(*args, **kwargs):
    print(args)
    print(kwargs)

class Test():
    pass

Test.t = t.__get__(Test(), Test)  # binding to the instance of Test
For example:
Test().t(1, 2, x=1, y=2)
(<__main__.Test object at 0x7fd7f6d845f8>, 1, 2)
{'y': 2, 'x': 1}
Note that the instance is also passed as a positional argument. That is, if you want your function to be an instance method, the function should be written in such a way that the first argument behaves as the instance of the class. Otherwise, you can bind the function to None and the class, which behaves like a staticmethod:
Test.tt = t.__get__(None, Test)
Test.tt(1, 2, x=1, y=2)
(1, 2)
{'y': 2, 'x': 1}
Furthermore, to make it a classmethod (the first argument is the class):
Test.ttt = t.__get__(Test, None)  # bind to the class
Test.ttt(1, 2, x=1, y=2)
(<class '__main__.Test'>, 1, 2)
{'y': 2, 'x': 1}
When you do C.f = f_monkey_patched and later look up f on an instance of C, the function is bound to that instance, effectively doing something like
obj.f = functools.partial(C.f, obj)
When you call obj.f(...), you are actually calling the partially bound function, i.e. f_monkey_patched(obj, ...).
On the other hand, when you do my_c.f = f_monkey_patched, you assign the function as-is to the attribute my_c.f. When you call my_c.f(...), the arguments are passed to the function as-is, so self is the first argument you passed, i.e. 1, and the remaining arguments go into *args.
I have a class A
class A(object):
    a = 1
    def __init__(self):
        self.b = 10
    def foo(self):
        print type(self).a
        print self.b
Then I want to create a class B, which is equivalent to A but with a different name and a different value of the class member a.
This is what I have tried:
class A(object):
    a = 1
    def __init__(self):
        self.b = 10
    def foo(self):
        print type(self).a
        print self.b

A_dummy = type('A_dummy', (object,), {})
A_attrs = {attr: getattr(A, attr) for attr in dir(A) if attr not in dir(A_dummy)}
B = type('B', (object,), A_attrs)
B.a = 2

a = A()
a.foo()
b = B()
b.foo()
However, I got an error:

  File "test.py", line 31, in main
    b.foo()
TypeError: unbound method foo() must be called with A instance as first argument (got nothing instead)
So how can I cope with this sort of job (creating a copy of an existing class)? Maybe a metaclass is needed? But what I would prefer is just a function FooCopyClass, such that:
B = FooCopyClass('B', A)
A.a = 10
B.a = 100
print A.a  # get 10 as output
print B.a  # get 100 as output
In this case, modifying a class member of B won't influence A, and vice versa.
The problem you're encountering is that looking up a method attribute on a Python 2 class creates an unbound method; it doesn't return the underlying raw function (on Python 3, unbound methods are abolished, and what you're attempting would work just fine). You need to bypass the descriptor protocol machinery that converts the function into an unbound method. The easiest way is to use vars to grab the class's attribute dictionary directly:
# Make copy of A's attributes
Bvars = vars(A).copy()
# Modify the desired attribute
Bvars['a'] = 2
# Construct the new class from it
B = type('B', (object,), Bvars)
Equivalently, you could copy and initialize B in one step, then reassign B.a after:
# Still need to copy; can't initialize from the proxy type that vars(SOMECLASS)
# returns to protect the class internals
B = type('B', (object,), vars(A).copy())
B.a = 2
Or for slightly non-idiomatic one-liner fun:
B = type('B', (object,), dict(vars(A), a=2))
Either way, when you're done:
B().foo()
will output:
2
10
as expected.
You may be trying to (1) create copies of classes for some real app: in that case, try using copy.deepcopy - it includes the mechanisms to copy classes. Just change the copy's __name__ attribute afterwards if needed. It works in both Python 2 and Python 3.
Or you may be trying to (2) learn and understand Python's internal class organization: in that case, there is no reason to fight with Python 2, as some wrinkles there were fixed in Python 3.
In any case, if you try using dir to fetch a class's attributes, you will end up with more than you want, as dir also retrieves the methods and attributes of all superclasses. So even if your method were made to work (in Python 2 that means getting the .im_func attribute of the retrieved unbound methods, to use as raw functions when creating a new class), your class would have more methods than the original one.
Actually, in both Python 2 and Python 3, copying a class's __dict__ will suffice. If you want mutable objects that are class attributes not to be shared, you should resort again to deepcopy. In Python 3:
from copy import deepcopy

class A(object):
    b = []
    def foo(self):
        print(self.b)

def copy_class(cls, new_name):
    # vars(cls) is a read-only mappingproxy; build a plain dict from it,
    # leaving out the '__dict__' and '__weakref__' slot descriptors, which
    # can't be deep-copied and shouldn't be transplanted anyway
    namespace = {name: value for name, value in vars(cls).items()
                 if name not in ('__dict__', '__weakref__')}
    return type(new_name, cls.__bases__, deepcopy(namespace))
In Python 2, it would work almost the same, but there is no convenient way to get the explicit bases of an existing class (i.e. __bases__ is not set). You can use __mro__ for the same effect. The only thing is that all ancestor classes are then passed, in a hardcoded order, as bases of the new class, so in a complex hierarchy you could see differences between the behaviors of B's descendants and A's descendants if multiple inheritance is used.
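A quick usage check of the Python 3 version (assuming A and copy_class as defined above):

B = copy_class(A, 'B')
B.b.append(1)  # mutate the copy's class attribute
A().foo()      # prints [] -- A.b was deep-copied, so it is unaffected
B().foo()      # prints [1]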
Is it possible to do something like the following:
class foo():
    def bar():  # a method that doesn't take any args
        # slow calculation
        return somefloat
    b = bar  # bar is a function but b just gives you the float attribute

f = foo()
f.b  # returns somefloat but doesn't require the empty parentheses
I hope the example is clear, since I'm not super clear on the terminology for what I want to do. My basic goal is to remove a bunch of parentheses for methods that don't have arguments, to make the code cleaner to read.
The function is slow and rarely used, so it would be easiest to calculate it in real time rather than calculate it once ahead of time and store the result.
Is this possible? Is it good practice? Is there a better way?
The standard way to achieve this is to use property, which is a decorator:
class Foo():
    @property
    def bar(self):
        # slow calculation
        return somefloat

f = Foo()
f.bar  # returns somefloat but doesn't require the empty parentheses
A couple of things to notice:
You still need self in the method signature as usual, because sometimes you're going to need to refer to e.g. self.some_attribute inside the method. As you can see, that doesn't affect the use of the property at all.
There's no need to clutter your API with both a f.bar() method and a f.b property - it's better to decide what makes most sense for your class than offer a heap of different ways to do the same thing.
b = bar obviously wouldn't work. However, a property would, for the simplest "doesn't require the empty parentheses" part of your ask:
b = property(bar)
Now every access to f.b will call f.bar() "behind the curtains".
However, this means that if you access f.b twice, f.bar() gets called twice, repeating the computation. If the repetition is irrelevant (i.e. if the result doesn't change for repeated computations on the same object), you can do better ("caching" the result behind f.b forever once it's first been computed) -- something like:
class foo(object):
    def bar(self):  # a method that doesn't take any args
        # slow calculation
        return somefloat

    def _cache_bar(self):
        # Cache under a different name: assigning to self.b directly would
        # fail, because the property is a data descriptor that intercepts
        # attribute assignment on b
        if not hasattr(self, '_b'):
            self._b = self.bar()
        return self._b

    b = property(_cache_bar)
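For what it's worth, on Python 3.8+ the standard library ships functools.cached_property, which implements exactly this compute-once-then-cache pattern; a minimal sketch with a placeholder value:

from functools import cached_property

class foo(object):
    @cached_property
    def b(self):
        # slow calculation runs only on the first access; the result is
        # then stored in the instance's __dict__ under the same name
        return 3.14

f = foo()
f.b  # computed on first access
f.b  # served from the cache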
Using a static method works, but you still need to call it with parentheses:
class foo(object):
    @staticmethod
    def bar():  # a method that doesn't take any args
        # slow calculation
        return "abc"
    b = bar  # bar is a function but b just gives you the float attribute

f = foo()
print f.b()
output:
$ python test.py
abc
The Question
I want to be able to initialize an object with a function that references the instance's attributes. I tried to capture what I want in this snippet, which produces a NameError: "global name 'self' is not defined":
class Test(object):
    def __init__(self, function=None):
        self.dicty = {1: {'height': 4, 'width': 2}, 2: {'height': 1, 'width': 2}}
        if function == None:
            self.function = lambda x: self.dicty[x]['height']
        else:
            self.function = function

if __name__ == '__main__':
    def func1(x):
        return self.dicty[x]['width']

    def func2(x):
        return self.dicty[x]['width']**2

    G = Test(function=func1)
    H = Test(function=func2)
I could solve the problem by creating a bunch of subclasses of Test, but that doesn't seem readable.
The Motivation
I am using NetworkX to do Python modeling and experiments. I was looking at the classic Albert-Barabasi model and creating subclasses of the DiGraph class that included Preference(self, node), Attachment(self, parent, child), and then Grow(self, max_allowable_nodes) methods. Instead of creating a whole bunch of subclasses as I mentioned before, I would love to be able to create an instance that modifies Preference(). This would allow me to run numerical experiments without my code looking too much like Frankenstein. Looking forward to learning something new.
Edit:
Didn't know about the types module or the general idea of reflection. Obviously, still pretty new here. Really appreciate everyone answering my questions and pointing me in the right direction!
Given that the lambda you create in your __init__ refers to the instance (self), it looks like you want to attach a method to your instance, whereas here you're attaching a function. You need to create a method from the function and attach it to the instance:
import types

class Test(object):
    def __init__(self, function=None):
        self.dicty = {1: {'height': 4, 'width': 2}, 2: {'height': 1, 'width': 2}}
        if function == None:
            function = lambda self, x: self.dicty[x]['height']
        self.function = types.MethodType(function, self)
A method is basically a function that is always passed the instance as the first argument, so you need to ensure any function you pass into your initialiser has self as the initial argument.
>>> t1 = Test()
>>> t1.function(1)
4
>>> t2 = Test(lambda self, x: self.dicty[x]['width'])
>>> t2.function(1)
2
When you define func1, there is no such thing as self. It's not an argument to the function, and it's not in any higher scope.
You could, instead, define a function that takes the dict you use as an argument and operates on that. In the Test class, you can then call the function on self.dicty. This would require you to change your lambda to also take dicty and x instead of just x.
def func1(dicty, x):
    return dicty[x]['width']
...and in Test...
class Test(object):
    # ... current code but with lambda tweak:
    # lambda dicty, x: dicty[x]['height']

    def do_something(self, x):
        return self.function(self.dicty, x)
Without seeing the rest of your code, it's hard to know what further simplifications you could make. But since all the functions seem to be using dicty[x] anyway, you could just write them to take that directly.
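Putting the pieces above together into a runnable sketch:

class Test(object):
    def __init__(self, function=None):
        self.dicty = {1: {'height': 4, 'width': 2}, 2: {'height': 1, 'width': 2}}
        self.function = function or (lambda dicty, x: dicty[x]['height'])

    def do_something(self, x):
        return self.function(self.dicty, x)

def func1(dicty, x):
    return dicty[x]['width']

print(Test().do_something(1))       # 4 -- default height lookup
print(Test(func1).do_something(1))  # 2 -- width lookup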
I am wondering if it is possible to list the variables expected by a Python function, prior to calling it, in order to pass the expected variables from a bigger dict containing a lot of variables.
I have searched the net but couldn't find anything. However, the python interpreter can show the list of expected variables, so there surely must be some way to do it in a script?
You can use either the inspect.signature() or inspect.getfullargspec() functions:
import inspect
argspec = inspect.getfullargspec(somefunction)
signature = inspect.signature(somefunction)
inspect.getfullargspec() returns a named tuple with 7 elements:
A list with the argument names
The name of the catchall *args parameter, if defined (None otherwise)
The name of the catchall **kwargs parameter, if defined (None otherwise)
A tuple with default values for the positional arguments; they go with the last elements of the argument names; match them up by the length of the defaults tuple.
A list of keyword-only parameter names
A dictionary of default values for the keyword-only parameter names, if any
and a dictionary containing the annotations
With inspect.signature() you get a Signature object, a rich object that models not only the above data as a more structured set of objects but also lets you bind values to parameters the same way a call to the function would.
Which one is better will depend on your use cases.
Demo:
>>> import inspect
>>> def foo(bar, baz, spam='eggs', *monty, python: "kwonly", spanish=42, **inquisition) -> "return annotation":
... pass
...
>>> inspect.getfullargspec(foo)
FullArgSpec(args=['bar', 'baz', 'spam'], varargs='monty', varkw='inquisition', defaults=('eggs',), kwonlyargs=['python', 'spanish'], kwonlydefaults={'spanish': 42}, annotations={'return': 'return annotation', 'python': 'kwonly'})
>>> signature = inspect.signature(foo)
>>> signature
<Signature (bar, baz, spam='eggs', *monty, python: 'kwonly', spanish=42, **inquisition) -> 'return annotation'>
>>> signature.parameters['python'].kind.description
'keyword-only'
>>> signature.bind('Eric', 'Idle', 'John', python='Cleese')
<BoundArguments (bar='Eric', baz='Idle', spam='John', python='Cleese')>
If you have a dictionary named values of possible parameter values, I'd use inspect.signature() and use the Signature.parameters mapping to match names:
from inspect import Parameter

posargs = [
    values[param.name]
    for param in signature.parameters.values()
    if param.kind is Parameter.POSITIONAL_ONLY
]

skip_kinds = {Parameter.POSITIONAL_ONLY, Parameter.VAR_POSITIONAL, Parameter.VAR_KEYWORD}
kwargs = {
    param.name: values[param.name]
    for param in signature.parameters.values()
    if param.name in values and param.kind not in skip_kinds
}
The above gives you a list of values for the positional-only parameters, and a dictionary for the rest (excepting any *args or **kwargs parameters).
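Tying it together, a minimal end-to-end sketch (somefunction and the values dict here are hypothetical stand-ins):

import inspect
from inspect import Parameter

def somefunction(a, b, c=3):  # hypothetical target function
    return a + b + c

values = {'a': 1, 'b': 2, 'c': 30, 'unused': 99}  # bigger dict with extras

signature = inspect.signature(somefunction)
kwargs = {
    name: values[name]
    for name, param in signature.parameters.items()
    if name in values and param.kind not in (Parameter.VAR_POSITIONAL, Parameter.VAR_KEYWORD)
}
print(somefunction(**kwargs))  # 33 -- extra keys like 'unused' are simply ignored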
Just as a side answer, I now use another approach to pass to functions the variables they expect: I pass them all.
What I mean is that I maintain a kind of global/shared dictionary of variables in my root object (which is the parent of all other objects), e.g.:
shareddict = {'A': 0, 'B':'somestring'}
Then I simply pass this dict to any method of any other object that is to be called, just like this:
shareddict.update(call_to_func(**shareddict))
As you can see, we unpack all the keys/values in shareddict as keyword arguments to call_to_func(). We also update shareddict with the returned result, we'll see below why.
Now with this technique, I can simply and clearly define in my functions/methods whether I need one or several variables from this dict:
def my_method1(A=None, *args, **kwargs):
    ''' This method only computes on A '''
    new_A = Do_some_stuff(A)
    return {'A': new_A}  # Return the new A in a dictionary to update the shared value of A in the shareddict

def my_method2(B=None, *args, **kwargs):
    ''' This method only computes on B '''
    new_B = Do_some_stuff(B)
    return {'B': new_B}  # Return the new B in a dictionary to update the shareddict

def my_method3(A=None, B=None, *args, **kwargs):
    ''' This method swaps A and B, and then creates a new variable C '''
    return {'A': B, 'B': A, 'C': 'a_new_variable'}  # Here we update both A and B and create the new variable C
As you can notice, all the methods above return a dict of variables, which will update the shareddict, and which will get passed along to other functions.
This technique has several advantages:
Quite simple to implement
Elegant way to maintain a shared list of variables but without using a global variable
Functions and methods clearly show in their definitions what they expect (one caveat, though: even mandatory variables need to be declared as keyword arguments with a default value such as None, which usually signals an optional variable, but here it's not)
The methods are inheritable and overloadable
Low memory footprint since the same shareddict is passed all along
The children functions/methods define what they need (bottom-up), instead of the root defining what arguments will be passed to children (top-down)
Very easy to create/update variables
Optionally, it's VERY easy to dump all those variables to a file, e.g. by using json.dumps(finaldict, sort_keys=True).
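A runnable condensation of the pattern (reusing my_method3 from above):

shareddict = {'A': 0, 'B': 'somestring'}

def my_method3(A=None, B=None, *args, **kwargs):
    ''' This method swaps A and B, and then creates a new variable C '''
    return {'A': B, 'B': A, 'C': 'a_new_variable'}

shareddict.update(my_method3(**shareddict))
print(shareddict)  # {'A': 'somestring', 'B': 0, 'C': 'a_new_variable'}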
Nice and easy:
import inspect  # library to import

def foo(bar, baz, spam='eggs', *monty, **python): pass  # example function

argspec = inspect.signature(foo)
print(argspec)  # print your output
prints: (bar, baz, spam='eggs', *monty, **python)
It also works for methods inside classes (very useful!):
class Complex:  # example class
    def __init__(self, realpart, imagpart):  # method inside the class
        self.r = realpart
        self.i = imagpart

argspec = inspect.signature(Complex)
print(argspec)
prints: (realpart, imagpart)
Suppose I have a generic function f. I want to programmatically create a function f2 that behaves the same as f, but has a customized signature.
More detail
Given a list l and a dictionary d I want to be able to:
Set the non-keyword arguments of f2 to the strings in l
Set the keyword arguments of f2 to the keys in d and the default values to the values of d
i.e. suppose we have
l = ["x", "y"]
d = {"opt": None}

def f(*args, **kwargs):
    # My code
Then I would want a function with signature:
def f2(x, y, opt=None):
    # My code
A specific use case
This is just a simplified version of my specific use case. I am giving this as an example only.
My actual use case (simplified) is as follows. We have a generic initiation function:
def generic_init(self, *args, **kwargs):
    """Function to initiate a generic object"""
    for name, arg in zip(self.__init_args__, args):
        setattr(self, name, arg)
    for name, default in self.__kw_init_args__.items():
        if name in kwargs:
            setattr(self, name, kwargs[name])
        else:
            setattr(self, name, default)
We want to use this function in a number of classes. In particular, we want to create a function __init__ that behaves like generic_init, but has the signature defined by some class variables at creation time:
class my_class:
    __init_args__ = ["x", "y"]
    __kw_init_args__ = {"my_opt": None}

__init__ = create_initiation_function(my_class, generic_init)
setattr(my_class, "__init__", __init__)
We want create_initiation_function to create a new function with the signature defined using __init_args__ and __kw_init_args__. Is it possible to write create_initiation_function?
Please note:
If I just wanted to improve the help, I could set __doc__.
We want to set the function signature on creation. After that, it doesn't need to be changed.
Instead of creating a function like generic_init but with a different signature, we could create a new function with the desired signature that just calls generic_init
We want to define create_initiation_function. We don't want to manually specify the new function!
Related
Preserving signatures of decorated functions: This is how to preserve a signature when decorating a function. We need to be able to set the signature to an arbitrary value.
From PEP 362, there actually does appear to be a way to set the signature in Python 3.3+, using the fn.__signature__ attribute:
from inspect import signature
from functools import wraps

def shared_vars(*shared_args):
    """Decorator factory that defines shared variables that are
    passed to every invocation of the function"""
    def decorator(f):
        @wraps(f)
        def wrapper(*args, **kwargs):
            full_args = shared_args + args
            return f(*full_args, **kwargs)
        # Override the signature, dropping the first (shared) parameter
        sig = signature(f)
        sig = sig.replace(parameters=tuple(sig.parameters.values())[1:])
        wrapper.__signature__ = sig
        return wrapper
    return decorator
Then:
>>> @shared_vars({"myvar": "myval"})
... def example(_state, a, b, c):
...     return _state, a, b, c
...
>>> example(1, 2, 3)
({'myvar': 'myval'}, 1, 2, 3)
>>> str(signature(example))
'(a, b, c)'
Note: the PEP is not exactly right; Signature.replace moved the params from a positional arg to a kw-only arg.
For your use case, having a docstring in the class/function should work -- that will show up in help() okay, and can be set programmatically (func.__doc__ = "stuff").
I can't see any way of setting the actual signature. I would have thought the functools module would have done it if it was doable, but it doesn't, at least in py2.5 and py2.6.
You can also raise a TypeError exception if you get bad input.
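A sketch of that docstring workaround (the documented signature here is illustrative):

def generic_init(self, *args, **kwargs):
    pass

# Purely cosmetic: the real signature stays (*args, **kwargs), but help()
# will at least display the intended one in the docstring
generic_init.__doc__ = "generic_init(self, x, y, my_opt=None)"
help(generic_init)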
Hmm, if you don't mind being truly vile, you can use compile()/eval() to do it. If your desired signature is specified by arglist=["foo","bar","baz"], and your actual function is f(*args, **kwargs), you can manage:
argstr = ", ".join(arglist)
fakefunc = "def func(%s):\n    return real_func(%s)\n" % (argstr, argstr)
fakefunc_code = compile(fakefunc, "fakesource", "exec")
fakeglobals = {}
eval(fakefunc_code, {"real_func": f}, fakeglobals)
f_with_good_sig = fakeglobals["func"]

help(f)                # f(*args, **kwargs)
help(f_with_good_sig)  # func(foo, bar, baz)
Changing the docstring and func_name should get you a complete solution. But, uh, eww...
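Those finishing touches would look something like this (using the Python 3 attribute name __name__; func_name is the Python 2 spelling):

f_with_good_sig.__name__ = f.__name__  # func_name on Python 2
f_with_good_sig.__doc__ = f.__doc__
help(f_with_good_sig)  # now shows the original name and docstring with the new signature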
I wrote a package named forge that solves this exact problem for Python 3.5+:
With your current code looking like this:
l = ["x", "y"]
d = {"opt": None}

def f(*args, **kwargs):
    # My code
And your desired code looking like this:
def f2(x, y, opt=None):
    # My code
Here is how you would solve that using forge:
import forge

f2 = forge.sign(
    forge.arg('x'),
    forge.arg('y'),
    forge.arg('opt', default=None),
)(f)
As forge.sign is a wrapper, you could also use it directly:
@forge.sign(
    forge.arg('x'),
    forge.arg('y'),
    forge.arg('opt', default=None),
)
def func(*args, **kwargs):
    # signature becomes: func(x, y, opt=None)
    return (args, kwargs)

assert func(1, 2) == ((), {'x': 1, 'y': 2, 'opt': None})
Have a look at makefun; it was made for that (exposing variants of functions with more or fewer parameters and an accurate signature), and it works in both Python 2 and 3.
Your example would be written like this:
try:  # python 3.3+
    from inspect import signature, Signature, Parameter
except ImportError:
    from funcsigs import signature, Signature, Parameter

from makefun import create_function

def create_initiation_function(cls, gen_init):
    # (1) check which signature we want to create
    params = [Parameter('self', kind=Parameter.POSITIONAL_OR_KEYWORD)]
    for mandatory_arg_name in cls.__init_args__:
        params.append(Parameter(mandatory_arg_name, kind=Parameter.POSITIONAL_OR_KEYWORD))
    for default_arg_name, default_arg_val in cls.__opt_init_args__.items():
        params.append(Parameter(default_arg_name, kind=Parameter.POSITIONAL_OR_KEYWORD, default=default_arg_val))
    sig = Signature(params)

    # (2) create the init function dynamically
    return create_function(sig, gen_init)
# ----- let's use it

def generic_init(self, *args, **kwargs):
    """Function to initiate a generic object"""
    assert len(args) == 0
    for name, val in kwargs.items():
        setattr(self, name, val)

class my_class:
    __init_args__ = ["x", "y"]
    __opt_init_args__ = {"my_opt": None}

my_class.__init__ = create_initiation_function(my_class, generic_init)
and works as expected:
# check
o1 = my_class(1, 2)
assert vars(o1) == {'y': 2, 'x': 1, 'my_opt': None}

o2 = my_class(1, 2, 3)
assert vars(o2) == {'y': 2, 'x': 1, 'my_opt': 3}

o3 = my_class(my_opt='hello', y=3, x=2)
assert vars(o3) == {'y': 3, 'x': 2, 'my_opt': 'hello'}
You can't do this with live code.
That is, you seem to want to take an actual, live function that looks like this:
def f(*args, **kwargs):
    print args[0]
and change it to one like this:
def f(a):
    print a
The reason this can't be done--at least without modifying the actual Python bytecode--is that these compile differently.
The former results in a function that receives two parameters: a list and a dict, and the code you're writing operates on that list and dict. The latter results in a function that receives one parameter, which is accessed as a local variable directly. If you changed the function "signature", so to speak, it'd result in a function like this:
def f(a):
    print a[0]
which obviously wouldn't work.
If you want more detail (though it doesn't really help you), a function that takes *args or **kwargs has one or two bits set in f.func_code.co_flags; you can examine this yourself. A function that takes a regular parameter has f.func_code.co_argcount set to 1; the *args version is 0. This is what Python uses to figure out how to set up the function's stack frame when it's called, to check parameters, etc.
If you want to play around with modifying the function directly--if only to convince yourself that it won't work--see this answer for how to create a code object and live function from an existing one to modify bits of it. (This stuff is documented somewhere, but I can't find it; it's nowhere in the types module docs...)
That said, you can dynamically change the docstring of a function. Just assign to func.__doc__. Be sure to only do this at load time (from the global context or--most likely--a decorator); if you do it later on, tools that load the module to examine docstrings will never see it.
Maybe I didn't understand the problem well, but if it's about keeping the same behavior while changing the function signature, then you can do something like:
# define a function
def my_func(name, age):
    print "I am %s and I am %s" % (name, age)

# label the function with a backup name
save_func = my_func

# rewrite the function with a different signature
def my_func(age, name):
    # use the backup name to call the old function and keep the old behavior
    save_func(name, age)

# you can use the new signature
my_func(35, "Bob")
This outputs:
I am Bob and I am 35
We want create_initiation_function to change the signature
Please don't do this.
We want to use this function in a number of classes
Please use ordinary inheritance.
There's no value in having the signature "changed" at run time.
You're creating a maintenance nightmare. No one else will ever bother to figure out what you're doing. They'll simply rip it out and replace it with inheritance.
Do this instead. It's simple and obvious and makes your generic init available in all subclasses in an obvious, simple, Pythonic way.
class Super( object ):
    def __init__( self, *args, **kwargs ):
        # the generic __init__ that we want every subclass to use
        pass

class SomeSubClass( Super ):
    def __init__( self, this, that, **kwdefaults ):
        super( SomeSubClass, self ).__init__( this, that, **kwdefaults )

class AnotherSubClass( Super ):
    def __init__( self, x, y, **kwdefaults ):
        super( AnotherSubClass, self ).__init__( x, y, **kwdefaults )
Edit 1: Answering new question:
You ask how you can create a function with this signature:
def fun(a, b, opt=None):
    pass
The correct way to do that in Python is thus:
def fun(a, b, opt=None):
    pass
Edit 2: Answering explanation:
"Suppose I have a generic function f. I want to programmatically create a function f2 that behaves the same as f, but has a customised signature."
def f(*args, **kw):
    pass
OK, then f2 looks like so:
def f2(a, b, opt=None):
    f(a, b, opt=opt)
Again, the answer to your question is so trivial that you obviously want to know something different than what you are asking. You really do need to stop asking abstract questions and explain your concrete problem.