Python - Is it possible to get the name of the chained function?

I'm working on a class that basically allows for method chaining, for setting some attributes on the different dictionaries it stores.
The syntax is as follows:
d = Test()
d.connect().setAttrbutes(Message=Blah, Circle=True, Key=True)
But there can also be other instances, so, for example:
d = Test()
d.initialise().setAttrbutes(Message=Blah)
Now I believe that I can overwrite the setAttrbutes function; I just don't want to create a function for each of the dictionaries. Instead I want to capture the name of the previous chained function. So in the examples above I would be given "connect" and "initialise", so I know which dictionary to store the attributes in.
I hope this makes sense. Any ideas would be greatly appreciated :)
EDIT:
Would this work / be a good work-around for the above problem?
Using method overloading, I can have the following methods:
def setAttrbutes(self, Name="Foo", Message="", Circle=False):
print "Attrbutes method called for 'Foo'"
def setAttrbutes(self, Name="Boo", Message=""):
print "Attrbutes method called for 'Boo'"
So, therefore, which method gets called depends on the name that is used. For example, in main, if I have the following:
d.setAttrbutes(Name="Foo", Message="Hello world", Circle=True) # this will call the first
d.setAttrbutes(Name="Boo", Message="Hello world") # this will call the second
Would this work, and, if not, why?

This is almost certainly a bad idea… but it is doable, in a few different ways.
Most simply, you can just have each function save its name in the object, e.g.:
import functools

def stash_name(func):
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        self._stashed_name = func.__name__
        return func(self, *args, **kwargs)
    return wrapper

class Test(object):
    @stash_name
    def foo(self, x):
        print x

    @stash_name
    def bar(self):
        print
Now, after calling d.connect(), d._stashed_name will be "connect".
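To connect it back to the original question, here is a minimal sketch of how setAttrbutes could use the stashed name to pick the right dictionary, reusing the stash_name decorator above. The ChainTest name and the _settings dictionary are hypothetical, not from the original post:

class ChainTest(object):
    def __init__(self):
        # one dictionary per chainable method (hypothetical layout)
        self._settings = {"connect": {}, "initialise": {}}

    @stash_name
    def connect(self):
        return self

    @stash_name
    def initialise(self):
        return self

    def setAttrbutes(self, **kwargs):
        # _stashed_name was set by whichever decorated method ran last
        self._settings[self._stashed_name].update(kwargs)
        return self

d = ChainTest()
d.connect().setAttrbutes(Message="Blah", Circle=True)
print d._settings["connect"]   # {'Message': 'Blah', 'Circle': True}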
At the opposite extreme, if you want to get really hacky, you can do this without any cooperation from the preceding method. Just use sys._getframe(1) to find your calling context, then you can examine the frame's f_code to see how you were called.
You can use the dis module to see the real bytecode. But basically, it will look like this pseudo-bytecode:
LOAD_NAME d
LOAD_ATTR connect
<possibly other ops to prepare arguments>
CALL_FUNCTION 1 (or any other CALL_FUNCTION_* variant)
LOAD_ATTR setAttributes
<various other ops to prepare arguments>
CALL_FUNCTION 0
In this case, you can either get the attribute name from the LOAD_ATTR, or get the value that was pushed and look at its im_func.__name__, depending which one you want.
Of course there will be other cases that don't look like this. For example, let's say I called it as getattr(d, ''.join(('con', 'nect')))() instead of d.connect(). Or I looked up the unbound method and built a bound method on the fly. Or… What would you want to do in each such case? If you have the answers to all such cases, then you can work out the rule that generates those answers, then figure out how to get that from the bytecode.
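If you really wanted to go the frame-inspection route, here is a rough, CPython-specific sketch for illustration only; it merely dumps the caller's bytecode with dis, and the backwards walk to the preceding LOAD_ATTR is left out:

import dis
import sys

class Test(object):
    def connect(self):
        return self

    def setAttrbutes(self, **kwargs):
        # Grab the caller's frame (CPython-specific, fragile).
        frame = sys._getframe(1)
        # Print the caller's bytecode with the current instruction marked;
        # from here you would have to walk backwards to find the LOAD_ATTR
        # that loaded the previous method in the chain.
        dis.disassemble(frame.f_code, frame.f_lasti)
        return self

Test().connect().setAttrbutes(Message="Blah")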

Since you tacked on a second, completely different, question, here's a second answer.
Would this work / be a good work-around for the above problem?
Using method overloading, I can have the following methods:
No, you can't. Python does not have method overloading. If you def a method with the same name as a previous method, it just replaces the first one entirely.
There are ways to simulate method overloading by dispatching on the argument values manually within the method body. For example:
def _setAttrbutes_impl1(self, Name, Message, Circle):
    pass

def _setAttrbutes_impl2(self, Name, Message):
    pass

def setAttrbutes(self, Name=None, Message="", Circle=None):
    if Circle is None:
        return self._setAttrbutes_impl2("Boo" if Name is None else Name, Message)
    else:
        return self._setAttrbutes_impl1("Foo" if Name is None else Name, Message, Circle)
But this is rarely useful.
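For completeness, a usage sketch, assuming the three methods above live on the Test class from the question:

d = Test()
d.setAttrbutes(Name="Boo", Message="Hello")               # dispatches to _setAttrbutes_impl2
d.setAttrbutes(Name="Foo", Message="Hello", Circle=True)  # dispatches to _setAttrbutes_impl1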

Related

Setting instance method syntax

The following code is of course totally pointless; it's not supposed to
do anything but illustrate what I'm confused about:
class func():
    def __call__(self, x):
        raise Exception("func.__call__ error")

def double(x):
    return 2*x

doubler = func()
doubler.__call__ = double
print doubler(2)
Can someone explain why this works? I would have expected that if I
wanted to set doubler.__call__ to something it would be a function
that takes two variables; I'd expect the code above to raise some sort
of too-many-parameters error. What gets passed to what, when?
(And then: How could I set doubler.__call__ to a function that
will actually have access to both "self" and "x"?)
(Context: An admittedly silly of-academic-interest example of why I might want to set an instance method this way: Each computable instance needs its own Approx method; creating a separate subclass for each instance seems "wrong"...)
Edit. Probably a better example, making it clear it has nothing
to do with magic-method magic:
class func():
    def call(self, x):
        raise Exception("func.call error")

def double(x):
    return 2*x

doubler = func()
doubler.call = double
print doubler.call(2)
On third thought, probably the following is the right way to do it. (i) It seems cleaner somehow, using the Python object model instead of tinkering with it; (ii) even 24 hours ago, with my then much cruder understanding, I would have expected it to work, and somehow in this version it simply seems to make sense to me that the function passed to the constructor should take only one variable; (iii) it seems to work regardless of whether I inherit from object, which I think means it would also work in 3.0.
class func3(object):
    def __init__(self, f):
        self.f = f
    def __call__(self, x):
        return self.f(x)

def double(x):
    return 2.0*x

f3 = func3(double)
print f3(2)
When you assign to doubler.__call__, you're binding a function to an instance attribute. This hides the class attribute of the same name that was created in the class statement.
Python's method binding only kicks in when you are looking up a class attribute via an instance. If the attribute's value is a descriptor (which functions are), then the descriptor's __get__ method gets called with appropriate parameters. For a function object, that binds the method to the instance (so self gets passed in automatically as the first argument).
Your first example wouldn't actually work in Python 3, only in Python 2. That's because in Python 2 you're creating an "old-style" class, which does all its method lookups on the instance. In new-style classes (which you can get in Python 2 by inheriting from object, or by default in Python 3), __special__ methods, when they're invoked by the interpreter (e.g. when you do doubler(2) to run doubler.__call__) are looked up only in the class, not in the instance's attributes. So your first example won't work with a new-style class, but the version that uses a normal method (call instead of __call__) would be fine.
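As for the parenthetical question about getting access to both self and x: a minimal sketch (using a hypothetical Func class so it doesn't collide with the code above) is to bind the function to the instance yourself, either through the descriptor protocol or types.MethodType:

import types

class Func(object):
    pass

def double(self, x):
    # Receives the instance as 'self' because we bound it below.
    return 2 * x

f = Func()
f.call = double.__get__(f, Func)        # descriptor protocol does the binding
# f.call = types.MethodType(double, f)  # equivalent, using the types module
print f.call(2)   # -> 4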
This is something between an answer to the question and a continuation of the question. I was kindly referred to another thread where more or less the same question was answered. I didn't follow the answers in that thread very well, being ignorant of the things the people there are talking about, hence the Question: Is what I say below correct? (If yes then this is an answer to the question above; if no I'd appreciate someone explaining why not...)
(i) Since I assign a function to an instance of func instead of to the class, it is now an "instance method", as opposed to a "class method".
(ii) And that's why it's not passed the instance as the first parameter; that happens with class methods but not with instance methods...

How to avoid parameter type in function's name?

I have a function foo that takes a parameter stuff. Stuff can be something in a database, and I'd like to create a function that takes a stuff_id, gets the stuff from the db, and executes foo.
Here's my attempt to solve it:
1/ Create a second function with suffix from_stuff_id
def foo(stuff):
    pass  # do something

def foo_from_stuff_id(stuff_id):
    stuff = get_stuff(stuff_id)
    foo(stuff)
2/ Modify the first function
def foo(stuff=None, stuff_id=None):
    if stuff_id:
        stuff = get_stuff(stuff_id)
    # do something
I don't like either way.
What's the most Pythonic way to do it?
Assuming foo is the main component of your application, use your first way. Each function should have a different purpose. The moment you combine multiple purposes into a single function, you can easily get lost in long streams of code.
If, however, some other function can also provide stuff, then go with the second.
The only thing I would add is to make sure you add docstrings (PEP 257) to each function to explain in words the role of the function. If necessary, you can also add comments to your code.
I'm not a big fan of type overloading in Python, but this is one of the cases where I might go for it if there's really a need:
def foo(stuff):
    if isinstance(stuff, int):
        stuff = get_stuff(stuff)
    ...
With type annotations it would look like this:
from typing import Union

def foo(stuff: Union[int, Stuff]):
    if isinstance(stuff, int):
        stuff = get_stuff(stuff)
    ...
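Not part of the original answer, but worth noting: Python 3.4+ ships functools.singledispatch, which dispatches on the type of the first argument and covers this pattern without manual isinstance checks. A rough sketch, where the Stuff class and get_stuff stub are stand-ins for the real ones:

from functools import singledispatch

class Stuff(object):
    pass

def get_stuff(stuff_id):
    # stand-in for the real database lookup
    return Stuff()

@singledispatch
def foo(stuff):
    # base case: called with a Stuff instance
    print("got a Stuff instance")

@foo.register(int)
def _(stuff_id):
    # int case: look the Stuff up first, then re-dispatch
    foo(get_stuff(stuff_id))

foo(Stuff())   # direct call with a Stuff
foo(42)        # looked up by id first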
It basically depends on how you've defined all these functions. If you're importing get_stuff from another module, the second approach is more Pythonic, because from an OOP perspective you create functions for one particular purpose, and in this case, since you've already defined get_stuff, you don't need to call it within another function.
If get_stuff is not defined in another module, then it depends on whether you are using classes or not. If you're using a class and you want to use all these pieces together, you can use a method for accessing or connecting to the database and use that method within other methods like foo.
Example:
from some_module import get_stuff

class MyClass:
    def __init__(self, *args, **kwargs):
        # ...
        self.stuff_id = kwargs['stuff_id']

    def foo(self):
        stuff = get_stuff(self.stuff_id)
        # do stuff
Or, if the functionality of foo depends on the existence of stuff, you can set stuff on the instance and simply check whether it is valid:
class MyClass:
    def __init__(self, *args, **kwargs):
        # ...
        _stuff_id = kwargs['stuff_id']
        self.stuff = get_stuff(_stuff_id)  # can return None

    def foo(self):
        if self.stuff:
            pass  # do stuff
        else:
            pass  # do other stuff
Or another neat design pattern for such situations might be using a dispatcher function (or method in class) that delegates the execution to different functions based on the state of stuff.
def delegator(stuff, stuff_id):
    if stuff:  # or other condition
        foo(stuff)
    else:
        get_stuff(stuff_id)

Calling python dictionary of function from class

I'm relatively new to python and would like to make a class that has a dictionary that maps names to different methods in the class. I currently have:
class Test:
    functions = {"Test1":test1, "Test2":test2}

    def test1(self, arg1):
        print "test1"

    def test2(self, arg1):
        print "test2" + arg1

    def call_me(self, arg):
        self.functions[arg](arg)
Then in my main.py I have the following:
from Test import Test
t = Test()
t.call_me('Test1')
When I call this function I get an error saying name test1 is not defined. Can anyone tell me what I am doing wrong? Any help would be greatly appreciated.
You've got multiple problems here.
First, in this line:
functions = {"Test1":test1, "Test2":test2}
At the time Python executes this line of code, there is nothing called test1 or test2, so you're going to get an immediate NameError. If you want to do things this way, you're going to have to define functions after all of the functions have been defined, not before.
Next, on the same line, test1 and test2 at this point are plain-old functions. But you're trying to call them as if they were methods, with the magic self and everything. That isn't going to work. If you understand how methods work, it should be obvious that you can work around this in the call_me method:
def call_me(self, arg):
    self.functions[arg].__get__(self, type(self))(arg)
(In this case, you can also get away with just explicitly passing self as an extra argument. But make sure you understand why before doing that.)
Finally, you're trying to call call_me with the function test1, instead of the name 'Test1'. Presumably the whole reason you've created this mapping is so that you can use the names (dynamically, as strings), so let's actually use them:
t.call_me('Test1')
Note that if the only reason you can't use getattr is that the runtime names you want to look up aren't the same as the method names you want to define, you can always have a map of strings to strings, and look up the resulting strings with getattr, which avoids all the other problems here. Borrowing from aruisdante's answer, adding in the name lookup and remembering to pass arg:
functions = {"Test1": "test1", "Test2": "test2"}
def call_me(self, arg):
return getattr(self, self.functions[arg])(arg)
You need string quotes around your argument, and the T needs to be capitalized:
t.call_me('Test1')
However, Python already has the functionality you're trying to replicate built into it via the getattr built-in. I.e. you can just do:
def call_me(self, arg):
    return getattr(self, arg)()
Note that in this case, the name must be exactly the same as the method name or it will raise an AttributeError, so it would be:
t.call_me('test1')
UPDATE
So now that you've edited your question, it's clear what the problem is:
class Test:
    functions = {"Test1":test1, "Test2":test2}
This is defining functions at the static/class scope. At this point, test1 and test2 haven't actually been created yet, and they aren't bound to a class instance (so no way to know what self should be). The 'correct' solution if you wanted to have arbitrary mappings (so getattr doesn't fit the bill) would be to move this inside an __init__():
class Test:
    def __init__(self):
        self._functions = {"Test1": self.test1, "Test2": self.test2}

    def call_me(self, arg):
        return self._functions[arg](arg)
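Assuming test1 and test2 from the question are also defined on the class, usage then looks like:

t = Test()
t.call_me('Test1')   # calls self.test1('Test1'), which prints "test1"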
In your dict, "Test1" and "Test2" are capitalized, while the corresponding functions are not. Change the dict keys to lowercase and everything should work.

Why do we use @staticmethod?

I just can't see why we need to use @staticmethod. Let's start with an example.
class test1:
    def __init__(self, value):
        self.value = value

    @staticmethod
    def static_add_one(value):
        return value + 1

    @property
    def new_val(self):
        self.value = self.static_add_one(self.value)
        return self.value

a = test1(3)
print(a.new_val)  ## >>> 4
class test2:
    def __init__(self, value):
        self.value = value

    def static_add_one(self, value):
        return value + 1

    @property
    def new_val(self):
        self.value = self.static_add_one(self.value)
        return self.value

b = test2(3)
print(b.new_val)  ## >>> 4
In the example above, the method static_add_one in the two classes does not require the instance of the class (self) in its calculation.
The method static_add_one in the class test1 is decorated with @staticmethod and works properly.
But at the same time, the method static_add_one in the class test2, which has no @staticmethod decoration, also works properly, by the trick of accepting a self argument that it doesn't use at all.
So what is the benefit of using @staticmethod? Does it improve performance? Or is it just due to the Zen of Python, which states that "Explicit is better than implicit"?
The reason to use staticmethod is if you have something that could be written as a standalone function (not part of any class), but you want to keep it within the class because it's somehow semantically related to the class. (For instance, it could be a function that doesn't require any information from the class, but whose behavior is specific to the class, so that subclasses might want to override it.) In many cases, it could make just as much sense to write something as a standalone function instead of a staticmethod.
Your example isn't really the same. A key difference is that, even though you don't use self, you still need an instance to call static_add_one; you can't call it directly on the class with test2.static_add_one(1). So there is a genuine difference in behavior there. The most serious "rival" to a staticmethod isn't a regular method that ignores self, but a standalone function.
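A minimal sketch of that behavioral difference (hypothetical class names, Python 2 syntax to match the rest of the thread):

class WithStatic(object):
    @staticmethod
    def add_one(value):
        return value + 1

class WithoutStatic(object):
    def add_one(self, value):   # self is accepted but ignored
        return value + 1

print WithStatic.add_one(1)       # 2 -- no instance needed
print WithoutStatic().add_one(1)  # 2 -- an instance is required
# WithoutStatic.add_one(1) raises TypeError under Python 2 (unbound method)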
Today I suddenly found a benefit of using @staticmethod.
If you create a staticmethod within a class, you don't need to create an instance of the class before using it.
For example,
class File1:
    def __init__(self, path):
        out = self.parse(path)

    def parse(self, path):
        # ... parsing work ...
        return x

class File2:
    def __init__(self, path):
        out = self.parse(path)

    @staticmethod
    def parse(path):
        # ... parsing work ...
        return x

if __name__ == '__main__':
    path = 'abc.txt'
    File1.parse(path)  # TypeError: unbound method parse() ....
    File2.parse(path)  # Goal!!!!!!!!!!!!!!!!!!!!
Since the method parse is strongly related to the classes File1 and File2, it is more natural to put it inside the class. However, sometimes this parse method may also be used in other classes under some circumstances. If you want to do so with File1, you must create an instance of File1 before calling the method parse. With a staticmethod, as in the class File2, you may call the method directly using the syntax File2.parse.
This makes your work more convenient and natural.
I will add something other answers didn't mention. It's not only a matter of modularity, of putting something next to other logically related parts. It's also that the method could be non-static at another point of the hierarchy (i.e. in a subclass or superclass) and thus participate in polymorphism (type-based dispatching). So if you put that function outside the class you will be precluding subclasses from effectively overriding it. Now, say you realize you don't need self in function C.f of class C; you have three options:
Put it outside the class. But we just decided against this.
Do nothing new: while unused, still keep the self parameter.
Declare that you are not using the self parameter, while still letting other C methods call f as self.f, which is required if you wish to keep open the possibility of further overrides of f that do depend on some instance state.
Option 2 demands less conceptual baggage (you already have to know about self and methods-as-bound-functions, because it's the more general case). But you may still prefer to be explicit about self not being used (and the interpreter could even reward you with some optimization, not having to partially apply a function to self). In that case, you pick option 3 and add @staticmethod on top of your function.
Use @staticmethod for methods that don't need to operate on a specific object, but that you still want located in the scope of the class (as opposed to module scope).
Your example in test2.static_add_one wastes its time passing an unused self parameter, but otherwise works the same as test1.static_add_one. Note that this extraneous parameter can't be optimized away.
One example I can think of is in a Django project I have, where a model class represents a database table, and an object of that class represents a record. There are some functions used by the class that are stand-alone and do not need an object to operate on, for example a function that converts a title into a "slug", which is a representation of the title that follows the character set limits imposed by URL syntax. The function that converts a title to a slug is declared as a staticmethod precisely to strongly associate it with the class that uses it.
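As a hedged illustration (this is not the actual Django code, just a stand-in Article class), such a staticmethod might look like:

import re

class Article(object):
    def __init__(self, title):
        self.title = title
        self.slug = self.make_slug(title)

    @staticmethod
    def make_slug(title):
        # Lowercase the title and replace runs of non-alphanumerics with "-".
        return re.sub(r'[^a-z0-9]+', '-', title.lower()).strip('-')

print Article("Hello, World!").slug   # hello-world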

I don't understand this python __del__ behaviour

Can someone explain why the following code behaves the way it does:
import types

class Dummy():
    def __init__(self, name):
        self.name = name
    def __del__(self):
        print "delete", self.name

d1 = Dummy("d1")
del d1
d1 = None
print "after d1"

d2 = Dummy("d2")
def func(self):
    print "func called"
d2.func = types.MethodType(func, d2)
d2.func()
del d2
d2 = None
print "after d2"

d3 = Dummy("d3")
def func(self):
    print "func called"
d3.func = types.MethodType(func, d3)
d3.func()
d3.func = None
del d3
d3 = None
print "after d3"
The output (note that the destructor for d2 is never called) is this (Python 2.7):
delete d1
after d1
func called
after d2
func called
delete d3
after d3
Is there a way to "fix" the code so the destructor is called without deleting the method added? I mean, the best place to put the d2.func = None would be in the destructor!
Thanks
[edit] Based on the first few answers, I'd like to clarify that I'm not asking about the merits (or lack thereof) of using __del__. I tried to create the shortest function that would demonstrate what I consider to be non-intuitive behavior. I'm assuming a circular reference has been created, but I'm not sure why. If possible, I'd like to know how to avoid the circular reference....
You cannot assume that __del__ will ever be called - it is not a place to hope that resources are automagically deallocated. If you want to make sure that a (non-memory) resource is released, you should make a release() or similar method and then call that explicitly (or use it in a context manager as pointed out by Thanatos in comments below).
At the very least you should read the __del__ documentation very closely, and then you should probably not try to use __del__. (Also refer to the gc.garbage documentation for other bad things about __del__)
I'm providing my own answer because, while I appreciate the advice to avoid __del__, my question was how to get it to work properly for the code sample provided.
Short version: The following code uses weakref to avoid the circular reference. I thought I'd tried this before posting the question, but I guess I must have done something wrong.
import types, weakref

class Dummy():
    def __init__(self, name):
        self.name = name
    def __del__(self):
        print "delete", self.name

d2 = Dummy("d2")
def func(self):
    print "func called"
d2.func = types.MethodType(func, weakref.ref(d2))  # This works
#d2.func = func.__get__(weakref.ref(d2), Dummy)    # This works too
d2.func()
del d2
d2 = None
print "after d2"
Longer version:
When I posted the question, I did search for similar questions. I know you can use with instead, and that the prevailing sentiment is that __del__ is BAD.
Using with makes sense, but only in certain situations. Opening a file, reading it, and closing it is a good example where with is a perfectly good solution. You've got a specific block of code where the object is needed, and you want to clean up the object at the end of the block.
A database connection seems to be used often as an example that doesn't work well using with, since you usually need to leave the section of code that creates the connection and have the connection closed in a more event-driven (rather than sequential) timeframe.
If with is not the right solution, I see two alternatives:
You make sure __del__ works (see this blog for a better description of weakref usage)
You use the atexit module to run a callback when your program closes. See this topic for example.
While I tried to provide simplified code, my real problem is more event-driven, so with is not an appropriate solution (with is fine for the simplified code). I also wanted to avoid atexit, as my program can be long-running, and I want to be able to perform the cleanup as soon as possible.
So, in this specific case, I find it to be the best solution to use weakref and prevent circular references that would prevent __del__ from working.
This may be an exception to the rule, but there are use-cases where using weakref and __del__ is the right implementation, IMHO.
Instead of del, you can use the with statement.
http://effbot.org/zone/python-with-statement.htm
Just like with file objects, you could do something like:
with Dummy('d1') as d:
    # stuff
# d's __exit__ method is guaranteed to have been called
del doesn't call __del__
del, in the way you are using it, removes a local variable. __del__ is called when the object is destroyed. Python, as a language, makes no guarantees as to when it will destroy an object.
CPython, the most common implementation of Python, uses reference counting. As a result, del will often work as you expect. However, it will not work in the case that you have a reference cycle:
d2 -> d2.func -> d2
Python doesn't detect this and so won't clean it up right away. And it's not just reference cycles. If an exception is thrown, you probably still want your destructor to be called. However, Python will typically hold onto the local variables as part of its traceback.
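A quick way to see the cycle biting in the d2 example above (this sketch is CPython 2 specific and not part of the original answer):

import gc

# After "del d2; d2 = None" the instance is unreachable, but it sits in a
# reference cycle (instance -> bound method -> instance) and it defines
# __del__, so CPython 2's cycle collector refuses to collect it and parks
# it in gc.garbage instead of calling the destructor.
gc.collect()
print gc.garbage   # the leaked Dummy("d2") instance shows up here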
The solution is not to depend on the __del__ method. Rather, use a context manager.
class Dummy:
    def __enter__(self):
        return self
    def __exit__(self, type, value, traceback):
        print "Destroying", self

with Dummy() as dummy:
    pass  # Do whatever you want with dummy in here
# __exit__ will be called before you get here
This is guaranteed to work, and you can even check the parameters to see whether you are handling an exception and do something different in that case.
A full example of a context manager.
class Dummy(object):
    def __init__(self, name):
        self.name = name
    def __enter__(self):
        return self
    def __exit__(self, exct_type, exce_value, traceback):
        print 'cleanup:', d
    def __repr__(self):
        return 'Dummy(%r)' % (self.name,)

with Dummy("foo") as d:
    print 'using:', d
print 'later:', d
It seems to me the real heart of the matter is here:
adding the functions is dynamic (at runtime) and not known in advance
I sense that what you are really after is a flexible way to bind different functionality to an object representing program state, also known as polymorphism. Python does that quite well, not by attaching/detaching methods, but by instantiating different classes. I suggest you look again at your class organization. Perhaps you need to separate a core, persistent data object from transient state objects. Use the has-a paradigm rather than is-a: each time state changes, you either wrap the core data in a state object, or you assign the new state object to an attribute of the core.
If you're sure you can't use that kind of pythonic OOP, you could still work around your problem another way by defining all your functions in the class to begin with and subsequently binding them to additional instance attributes (unless you're compiling these functions on the fly from user input):
class LongRunning(object):
    def bark_loudly(self):
        print("WOOF WOOF")

    def bark_softly(self):
        print("woof woof")

while True:
    d = LongRunning()
    d.bark = d.bark_loudly
    d.bark()
    d.bark = d.bark_softly
    d.bark()
An alternative solution to using weakref is to dynamically bind the function to the instance only when it is called by overriding __getattr__ or __getattribute__ on the class to return func.__get__(self, type(self)) instead of just func for functions bound to the instance. This is how functions defined on the class behave. Unfortunately (for some use cases) python doesn't perform the same logic for functions attached to the instance itself, but you can modify it to do this. I've had similar problems with descriptors bound to instances. Performance here probably isn't as good as using weakref, but it is an option that will work transparently for any dynamically assigned function with the use of only python builtins.
If you find yourself doing this often, you might want a custom metaclass that does dynamic binding of instance-level functions.
Another alternative is to add the function directly to the class, which will then properly perform the binding when it's called. For a lot of use cases, this would have some headaches involved: namely, properly namespacing the functions so they don't collide. The instance id could be used for this, though, since the id in CPython isn't guaranteed unique over the life of the program, you'd need to ponder this a bit to make sure it works for your use case... in particular, you probably need to make sure you delete the class function when an object goes out of scope, and thus its id/memory address is available again. __del__ is perfect for this :). Alternatively, you could clear out all methods namespaced to the instance on object creation (in __init__ or __new__).
Another alternative (rather than messing with python magic methods) is to explicitly add a method for calling your dynamically bound functions. This has the downside that your users can't call your function using normal python syntax:
class MyClass(object):
    def dynamic_func(self, func_name):
        return getattr(self, func_name).__get__(self, type(self))

    def call_dynamic_func(self, func_name, *args, **kwargs):
        return getattr(self, func_name).__get__(self, type(self))(*args, **kwargs)

    """
    Alternate without using descriptor functionality:
    def call_dynamic_func(self, func_name, *args, **kwargs):
        return getattr(self, func_name)(self, *args, **kwargs)
    """
Just to make this post complete, I'll show your weakref option as well:
import weakref
inst = MyClass()
def func(self):
print 'My func'
# You could also use the types module, but the descriptor method is cleaner IMO
inst.func = func.__get__(weakref.ref(inst), type(inst))
use eval()
In [1]: int('25.0')
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-1-67d52e3d0c17> in <module>
----> 1 int('25.0')
ValueError: invalid literal for int() with base 10: '25.0'
In [2]: int(float('25.0'))
Out[2]: 25
In [3]: eval('25.0')
Out[3]: 25.0
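Not part of the original answer, but if the input string isn't fully trusted, ast.literal_eval is the usual safer alternative to eval for parsing literals:

import ast
print(ast.literal_eval('25.0'))   # 25.0, without executing arbitrary code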
