Is there an effective way to debug side-effects in Python?
For example, a list (or any other mutable object) passed as an argument to a function:
>>> some_list = [1, 2]
>>> some_function_with_huge_call_tree(some_list)
>>> print(some_list)
[2, 1]
How do I determine where in the program the list was reversed?
Another example, a class instance passed as an argument:
>>> print(obj.x)
foo
>>> some_function_with_super_huge_call_tree(obj)
>>> print(obj.x)
bar
Where was the attribute of the class instance changed?
In both cases I want something like this:
pdb.break_on_change(some_list) # case 1
pdb.break_on_change(obj.x) # case 2
Unfortunately, pdb does not have such a function.
In other words, I'm trying to find a common solution for all cases.
You could put in a mock object that tells you what is done with it.
Here is a (too) minimal example that traces down where a mutating method is called.
def throw(*args, **kwargs):
    raise AssertionError()

class Mock(object):
    def __init__(self, inner):
        self.__inner = inner

    def __getattr__(self, name):
        if name == 'mutate':  # name of the mutating method you want to trap
            return throw
        return getattr(self.__inner, name)
Then use it as in
some_function(Mock(obj))
to find out whether some_function is mutating obj.
This example lacks many features that you would rightfully expect from a mock infrastructure. Probably worst of all, it doesn't support magic (dunder) methods. Maybe you can expand it to fit your needs. Actually, I would be surprised if there weren't already a library that does this.
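For the first case in the question (the reversed list), the same idea applies even more directly. Here is a minimal sketch, where some_function_with_huge_call_tree stands in for the hypothetical function from the question: a list subclass that prints a stack trace whenever reverse is called, pointing you at the offending frame.

import traceback

class TracingList(list):
    """A list that reports where a mutating call came from."""
    def reverse(self):
        traceback.print_stack()  # shows the call chain that led to the mutation
        return list.reverse(self)

some_list = TracingList([1, 2])
some_function_with_huge_call_tree(some_list)  # a stack trace is printed at the reversal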
I'm making a program in Python in which specific instances of an object must be decorated with new functions built at runtime.
I've seen very simple examples of adding functions to objects through MethodType:
import types

def foo():
    print("foo")

class A:
    bar = "bar"

a = A()
a.foo = types.MethodType(foo, a)
But none of the examples I've seen show how a function added in this manner can reference the new owner's attributes. As far as I know, even though this binds the foo() function to the instance a, foo() must still be a pure function and cannot reference anything belonging to the instance.
In my case, I need functions to change attributes of the object they are added to. Here are two examples of the kind of thing I need to be able to do:
class A:
    foo = "foo"
    def printme():
        print(foo)

def nofoo():
    foo = "bar"

def printBar():
    if foo != "foo":
        self.printme()
I would then need a way to add a copy of nofoo() or printBar() to an A object in such a way that they can correctly access the object's attribute named foo and the method named printme().
So, is this possible? Is there a way to do this kind of programming in vanilla Python? Or, at least, is there a programming pattern that achieves this kind of behavior?
P.S.: In my system, I also add attributes dynamically to objects. Your first thought then might be "How can I ever be sure that the object I'm adding the nofoo() function to actually has an attribute named foo?", but I also have a fairly robust tag system that makes sure that I never try to add a nofoo() function to an object that doesn't have a foo attribute. The reason I mention this is that solutions that look at the class definition aren't very useful to me.
As said in the comments, your function actually must take at least one parameter: self, the instance the method is being called on. The self parameter can be used as it would be used in a normal instance method. Here is an example:
>>> from types import MethodType
>>> class Class:
...     def method(self):
...         print('method run')
...
>>> cls = Class()
>>> def func(self):  # must accept one argument, `self`
...     self.method()
...
>>> cls.func = MethodType(func, cls)
>>> cls.func()
method run
Without your function accepting self, an exception would be raised:
>>> def func():
...     self.method()
...
>>> cls.func = MethodType(func, cls)
>>> cls.func()
Traceback (most recent call last):
  File "<pyshell#21>", line 1, in <module>
    cls.func()
TypeError: func() takes 0 positional arguments but 1 was given
import types

class A:
    def __init__(self):
        self.foo = "foo"
    def printme(self):
        print(self.foo)

def nofoo(self):
    self.foo = "bar"

a = A()
a.nofoo = types.MethodType(nofoo, a)
a.nofoo()
a.printme()
prints
bar
It's not entirely clear what you're trying to do, and I'm worried that whatever it is may be a bad idea. However, I can explain how to do what you're asking, even if it isn't what you want, or should want. I'll point out that it's very uncommon to want to do the second version below, and even rarer to want to do the third version, but Python does allow them both, because "even rarer than very uncommon" still isn't "never". And, in the same spirit…
The short answer is "yes". A dynamically-added method can access the owner object exactly the same way a normal method can.
First, here's a normal, non-dynamic method:
class C:
    def meth(self):
        return self.x

c = C()
c.x = 3
c.meth()
Obviously, with a normal method like this, when you call c.meth(), the c ends up as the value of the self parameter, so self.x is c.x, which is 3.
Now, here's how you dynamically add a method to a class:
class C:
    pass

c = C()
c.x = 3

def meth(self):
    print(self.x)

C.meth = meth
c.meth()
This is actually doing exactly the same thing. (Well, we've left another name for the same function object sitting around in globals, but that's the only difference.) If C.meth is the same function it was in the first version, then obviously whatever magic made c.meth() work in the first version will do the exact same thing here.
(This used to be slightly more complicated in Python 2, because of unbound methods, and classic classes too… but fortunately you don't have to worry about that.)
Finally, here's how you dynamically add a method to an instance:
import types

class C:
    pass

c = C()
c.x = 3

def meth(self):
    print(self.x)

c.meth = types.MethodType(meth, c)
c.meth()
Here, you actually have to know the magic that makes c.meth() work in the first two cases. So read the Descriptor HOWTO. After that, it should be obvious.
But if you just want to pretend that Guido is a wizard (Raymond definitely is a wizard) and it's magic… Well, in the first two versions, Guido's magic wand creates a special bound method object whenever you ask for c.meth, but even he isn't magical enough to do that when C.meth doesn't exist. But we can painstakingly create that same bound method object and store it as c.meth. After that, we're going to get the same thing we stored whenever we ask for c.meth, which we explicitly built as the same thing we got in the first two examples, so it'll obviously do the same thing.
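To see that equivalence concretely, here is a small sketch reusing meth and c from the examples above: the bound method built manually with types.MethodType compares equal to the one the descriptor protocol builds behind the scenes.

import types

bound_manual = types.MethodType(meth, c)  # what the instance version stores
bound_magic = meth.__get__(c, type(c))    # what the wand does for C.meth
assert bound_manual == bound_magic        # same function bound to the same instance
bound_magic()                             # prints 3, just like c.meth()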
But what if we did this:
class C:
    pass

c = C()
c.x = 3

def meth(self):
    print(self.x)

c.meth = meth
c.meth(c)
Here, you're not letting Guido do his descriptor magic to create c.meth, and you're not doing it manually, you're just sticking a regular function there. Which means if you want anything to show up as the self parameter, you have to explicitly pass it as an argument, as in that silly c.meth(c) line at the end. But if you're willing to do that, then even this one works. No matter how self ends up as c, self.x is going to be c.x.
In Python 3.5, say I have:
class Foo:
    def __init__(self, bar, barbar):
        self.bar = bar
        self.barbar = barbar
I want to get the list ["bar", "barbar"] from the class.
I know I can do:
foo = Foo(1, 2)
foo.__dict__.keys()
Is there a way to get ["bar", "barbar"] without instantiating an object?
No, because the attributes are dynamic (so-called instance attributes). Consider the following:
class Foo:
    def __init__(self):
        self.bar = 1
    def twice(self):
        self.barbar = 2

f = Foo()
print(list(f.__dict__.keys()))
f.twice()
print(list(f.__dict__.keys()))
In the first print, only f.bar has been set, so that is the only attribute shown when printing the keys. But after calling f.twice(), a new attribute is created on f, and printing now shows both bar and barbar.
Warning -
The following isn't foolproof in always providing 100% correct data. If you end up having something like self.y = int(1) in your __init__, you will end up including the name int in your collection of attributes, which is not a wanted result for your goals. Furthermore, if you happen to add a dynamic attribute somewhere in your code like Foo.some_attr = 'pork', then you will never see that either. Be aware of what it is that you are inspecting at what point of your code, and understand why you do and don't have those inclusions in your result. There are probably other breakages that will not give you the full 100% expectation of what all the attributes associated with this class are, but nonetheless, the following should give you something that you might be looking for.
However, I strongly suggest you take the advice of the other answers here and of the duplicate that was flagged, which explain why you can't/shouldn't do this.
The following is a form of solution you can try to mess around with:
I will expand on the inspect answer. However, I do question (and would probably advise against) the validity of doing something like this in production-ready code. For investigative purposes, sure, knock yourself out.
By using the inspect module, as indicated already in one of the other answers, you can use the getmembers function and then iterate through the members, inspecting the data you wish to investigate.
For example, suppose you are interested in the dynamic attributes assigned in __init__. We can take this example to illustrate:
from inspect import getmembers

class Foo:
    def __init__(self, x):
        self.x = x
        self.y = 1
        self.z = 'chicken'

members = getmembers(Foo)
for member in members:
    if '__init__' in member:  # member is a (name, value) tuple
        print(member[1].__code__.co_names)
Your output will be a tuple:
('x', 'y', 'z')
Ultimately, as you inspect the class Foo to get its members, there are attributes you can investigate further as you iterate over each member; each member in turn has its own attributes to inspect, and so on. For this particular example, we focus on __init__ and inspect its __code__ attribute (per the documentation: the code object representing the compiled function body), which has an attribute called co_names that provides the tuple of names shown in the output above.
Try classname.__annotations__.keys()
As Lærne mentioned, attributes declared inside of functions (like __init__) are dynamic. They effectively don't exist until the __init__ function is called.
However, there is a way to do what you want.
You can create class attributes, like so:
class Foo:
    bar = None
    barbar = None

    def __init__(self, bar, barbar):
        self.bar = bar
        self.barbar = barbar
And you can access those attributes like this:
[var for var in vars(Foo).keys() if not var.startswith('__')]
Which gives this result:
['bar', 'barbar']
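Alternatively, if all you need are the names of the parameters __init__ accepts, and (as in the Foo above) each parameter is assigned to an attribute of the same name, you can inspect the constructor's signature without instantiating anything. This is a sketch that relies on that naming assumption:

import inspect

class Foo:
    def __init__(self, bar, barbar):
        self.bar = bar
        self.barbar = barbar

# Drop 'self'; the remaining parameter names mirror the attribute names
# only because of the assumption stated above.
names = list(inspect.signature(Foo.__init__).parameters)[1:]
print(names)  # ['bar', 'barbar']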
WARNING: The following question is asking for information concerning poor practices and dirty code. Developer discretion is advised.
Note: This is different from the Creating a singleton in Python question because we want to address pickling and copying as well as normal object creation.
Goal: I want to create a value (called NoParam) that simulates the behavior of None. Specifically, I want any instance of _NoParamType to be the same value --- i.e. have the same id --- so that the is operator always returns True between two of these values.
Why: I have configuration classes that hold onto parameter values. Some of these parameters can take None as a valid value. However, I want these parameters to take particular default values if no value is specified. Therefore I need some sentinel that is not None to indicate that no parameter was specified. This sentinel needs to be something that could never be used as a valid parameter value. I would prefer a special type for this sentinel over some unlikely-to-be-used string.
For instance:
def add_param(name, value=NoParam):
    if value is NoParam:
        ...  # do something
    else:
        ...  # do something else
But let's not worry so much about the why. Let's focus on the how.
What I have so far:
I can achieve most of this behavior pretty easily. I have created a special module called util_const.py. It contains a class _NoParamType and a singleton instance of that class.
class _NoParamType(object):
    def __init__(self):
        pass

NoParam = _NoParamType()
I'm simply assuming that a second instance of this class will never be created. Whenever I want to use the value, I import util_const and use util_const.NoParam.
This works well for most cases. However, I just encountered a case where a NoParam value was stored as an object attribute. The object was deep copied using copy.deepcopy, and thus a second NoParam instance was created.
I found a very simple workaround for this by defining the __copy__ and __deepcopy__ methods:
class _NoParamType(object):
    def __init__(self):
        pass
    def __copy__(self):
        return NoParam
    def __deepcopy__(self, memo):
        return NoParam

NoParam = _NoParamType()
Now, if deepcopy is ever called on NoParam, it simply returns the existing NoParam instance.
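A quick sanity check of the workaround:

import copy

assert copy.copy(NoParam) is NoParam
assert copy.deepcopy(NoParam) is NoParam
assert copy.deepcopy([NoParam])[0] is NoParam  # holds when nested in containers too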
Now for the question:
Is there anything I can do to achieve this same behavior with pickling? Initially I thought I could define __getstate__ but the second instance has already been created at that point. Essentially I want pickle.loads(pickle.dumps(NoParam)) is NoParam to return True. Is there a way to do this (perhaps with metaclasses)?
To take it even further: is there anything I can do to ensure that only one instance of NoParam is ever created?
Solution
Big thanks to @user2357112 for answering the question about pickling.
I've also figured out how to make this class robust to module reloading as well. Here is everything I've learned, put together:
# -*- coding: utf-8 -*-
# util_const.py


class _NoParamType(object):
    """
    Class used to define `NoParam`, a sentinel that acts like None when None
    might be a valid value. The value of `NoParam` is robust to reloading,
    pickling, and copying.

    Example:
        >>> import util_const
        >>> from util_const import _NoParamType, NoParam
        >>> from six.moves import cPickle as pickle
        >>> import copy
        >>> id_ = id(NoParam)
        >>> versions = {
        ...     'util_const.NoParam': util_const.NoParam,
        ...     'NoParam': NoParam,
        ...     '_NoParamType()': _NoParamType(),
        ...     'copy': copy.copy(NoParam),
        ...     'deepcopy': copy.deepcopy(NoParam),
        ...     'pickle': pickle.loads(pickle.dumps(NoParam)),
        ... }
        >>> print(versions)
        >>> assert all(id(v) == id_ for v in versions.values())
        >>> import imp
        >>> imp.reload(util_const)
        >>> assert id(NoParam) == id(util_const.NoParam)
    """
    def __new__(cls):
        return NoParam

    def __reduce__(self):
        return (_NoParamType, ())

    def __copy__(self):
        return NoParam

    def __deepcopy__(self, memo):
        return NoParam

    def __call__(self, default):
        pass


# Create the only instance of _NoParamType that should ever exist.
# When the module is first loaded, globals() will not contain NoParam: a
# NameError will be thrown, causing the first instance of NoParam to be
# instantiated.
# If the module is reloaded (via imp.reload), globals() will contain
# NoParam, which skips the code that would instantiate a second object.
# Note: it is possible to hack around this via
#     >>> del util_const.NoParam
#     >>> imp.reload(util_const)
try:
    NoParam
except NameError:
    NoParam = object.__new__(_NoParamType)
Is there anything I can do to achieve this same behavior with pickling?
Yes.
class _NoParamType(object):
    def __new__(cls):
        return NoParam

    def __reduce__(self):
        return (_NoParamType, ())

NoParam = object.__new__(_NoParamType)
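A quick round-trip check: unpickling calls _NoParamType() via __reduce__, and __new__ hands back the existing singleton.

import pickle

assert pickle.loads(pickle.dumps(NoParam)) is NoParam
assert _NoParamType() is NoParam  # constructing the class also returns the singleton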
To take it even further: is there anything I can do to ensure that only one instance of NoParam is ever created?
Not without writing NoParam in C. Unless you write it in C and take advantage of C API-only capabilities, it'll always be possible to do object.__new__(type(NoParam)) to get another instance.
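To illustrate the caveat:

# Bypassing __new__ entirely still yields a second instance:
impostor = object.__new__(type(NoParam))
assert impostor is not NoParam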
I'm going to answer part X, not part Y:
I have configuration classes that holds onto values of parameters. Some of these parameters can take None as a valid type. However, I want these parameters to take particular default values if no type is specified.
...
For instance:
def add_param(name, value=NoParam):
    if value is NoParam:
        ...  # do something
    else:
        ...  # do something else
The proper way to test whether or not value was defined by the caller is to eliminate the default value completely:
def add_param(name, **kwargs):
    if 'value' not in kwargs:
        ...  # do something
    else:
        ...  # do something else
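This cleanly distinguishes "the caller passed nothing" from "the caller explicitly passed None":

add_param('size')              # 'value' not in kwargs: apply the default
add_param('size', value=None)  # the caller really meant None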
True, this breaks some introspection-based features, like linters that check whether you're passing the right arguments to functions, but that headache should be much, much less than trying to bend the identity system over backwards.
I would like to know if there is a way to create a list that executes some actions each time I use the method append (or any other method that changes the list's content).
I know that I could create a class that inherits from list and overwrites append, remove, and all the other methods that change the list's content, but I would like to know if there is another way.
By comparison, if I want to print 'edited' each time I edit an attribute of an object, I will not add print("edited") to every method of the object's class. Instead, I will just overwrite __setattr__.
I tried to create my own type that inherits from list and overwrites __setattr__, but that doesn't work: when I use myList.append, __setattr__ isn't called. I would like to know what really happens when I use myList.append. Are there some magic methods called that I could overwrite?
I know the question has already been asked here: What happens when you call `append` on a list?. The answer given there amounts to "there is no answer"... I hope that's a mistake.
I don't know if there is an answer to my request, so I will also explain why I am confronted with this problem; maybe I can search in another direction to do what I want. I have a class with several attributes, and when an attribute is edited, I want to execute some actions. As I explained before, I usually overwrite __setattr__ to do this. That works fine for most attributes. The problem is lists: if the attribute is used like this, myClass.myListAttr.append(something), __setattr__ isn't called even though the value of the attribute has changed.
The problem would be the same with dictionaries. Methods like pop don't call __setattr__.
If I understand correctly, you want something like Notify_list, which would call some method (an argument to its constructor in my implementation) every time a mutating method is called, so that you could do something like this:
class Test:
    def __init__(self):
        self.list = Notify_list(self.list_changed)

    def list_changed(self, method):
        print("self.list.{} was called!".format(method))
>>> x = Test()
>>> x.list.append(5)
self.list.append was called!
>>> x.list.extend([1,2,3,4])
self.list.extend was called!
>>> x.list[1] = 6
self.list.__setitem__ was called!
>>> x.list
[5, 6, 2, 3, 4]
The simplest implementation of this would be to create a subclass and override every mutating method:
class Notifying_list(list):
    __slots__ = ("notify",)

    def __init__(self, notifying_method, *args, **kw):
        self.notify = notifying_method
        list.__init__(self, *args, **kw)

    def append(self, *args, **kw):
        self.notify("append")
        return list.append(self, *args, **kw)

    # etc.
This is obviously not very practical: writing out the entire definition would be very tedious and repetitive, so we can create the new subclass dynamically for any given class with functions like the following:
import functools
import types

def notify_wrapper(name, method):
    """wraps a method to call self.notify(name) when called

    used by notifying_type"""
    @functools.wraps(method)
    def wrapper(*args, **kw):
        self = args[0]
        # use object.__getattribute__ instead of self.notify in
        # case __getattribute__ is one of the notifying methods,
        # in which case self.notify would raise a RecursionError
        notify = object.__getattribute__(self, "_Notify__notify")
        # knowing which method was called is useful, so the name is
        # passed along; you may want to change the arguments to the
        # notify method
        notify(name)
        return method(*args, **kw)
    return wrapper
def notifying_type(cls, notifying_methods="all"):
    """creates a subclass of cls that adds an extra function call when calling certain methods

    The constructor of the subclass takes a callable as the first argument
    and arguments for the original class constructor after that.
    The callable is called every time any of the methods specified in
    notifying_methods is called on the object; it is passed the name of the
    method as the only argument.

    If notifying_methods is left at the special value 'all', the function
    get_all_possible_method_names is used to create wrappers for nearly all
    methods."""
    if notifying_methods == "all":
        notifying_methods = get_all_possible_method_names(cls)

    def init_for_new_cls(self, notify_method, *args, **kw):
        self._Notify__notify = notify_method
        cls.__init__(self, *args, **kw)  # forward the rest to the original constructor

    namespace = {"__init__": init_for_new_cls,
                 "__slots__": ("_Notify__notify",)}
    for name in notifying_methods:
        # if this raises an AttributeError you are trying to wrap a
        # method that doesn't exist
        method = getattr(cls, name)
        namespace[name] = notify_wrapper(name, method)
    # using the type() constructor is easier than using a metaclass
    return type("Notify_" + cls.__name__, (cls,), namespace)
unbound_method_or_descriptor = (
    types.FunctionType,
    type(list.append),   # method_descriptor, not in types
    type(list.__add__),  # wrapper_descriptor, also not in types
)
def get_all_possible_method_names(cls):
    """generates the names of nearly all methods the given class defines

    three methods are blacklisted: __init__, __new__, and __getattribute__,
    for these reasons:
      - __init__ conflicts with the one defined in notifying_type
      - __new__ will not be called with an initialized instance, so there
        will not be a notify method to use
      - __getattribute__ is fine to override, just really annoying in most
        cases

    Note that this function may not work correctly in all cases;
    it was only tested with very simple classes and the builtin list."""
    blacklist = ("__init__", "__new__", "__getattribute__")
    for name, attr in vars(cls).items():
        if (name not in blacklist and
                isinstance(attr, unbound_method_or_descriptor)):
            yield name
Once we have notifying_type, creating Notify_list or Notify_dict is as simple as:

import collections.abc  # the ABCs live in collections.abc on modern Python

mutating_list_methods = set(dir(collections.abc.MutableSequence)) - set(dir(collections.abc.Sequence))
Notify_list = notifying_type(list, mutating_list_methods)

mutating_dict_methods = set(dir(collections.abc.MutableMapping)) - set(dir(collections.abc.Mapping))
Notify_dict = notifying_type(dict, mutating_dict_methods)
I have not tested this extensively and it quite possibly contains bugs / unhandled corner cases but I do know it worked correctly with list!
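For completeness, here is a quick usage sketch for Notify_dict, using print itself as the notify callable:

d = Notify_dict(print)  # print receives the name of each mutating method
d["a"] = 1              # prints: __setitem__
d.update(b=2)           # prints: update
d.pop("a")              # prints: pop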
My class looks like this:
class A:
    def __init__(self):
        self.bar = []
    ...
    @property
    def foo(self):
        return self.bar
Is there a way to find out inside foo whether a method will be called on its return value? I would like to be able to change the return value of foo depending on whether
a.foo.foobar()
or
a.foo
is called.
You could use a proxy class wrapping self.bar (or just self, FWIW) in foo(), and overload the proxy's __getattr__() or __getattribute__() methods (the latter is more tricky and can slow down your program quite a bit, but well...).
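Here is a minimal sketch of that idea; the class name is illustrative, not from any library:

class AccessLoggingProxy:
    """Wraps an object and reports attribute access on it."""
    def __init__(self, wrapped):
        self._wrapped = wrapped

    def __getattr__(self, name):  # only called for names the proxy itself lacks
        print("attribute {!r} accessed".format(name))
        return getattr(self._wrapped, name)

class A:
    def __init__(self):
        self.bar = []

    @property
    def foo(self):
        return AccessLoggingProxy(self.bar)

a = A()
a.foo            # no attribute access on the proxy yet
a.foo.append(1)  # prints: attribute 'append' accessed, then appends to the real list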
Now the question is: what is your real problem ? There might be better / safer solutions...
for the fun of it...
#!/usr/bin/python3
import traceback

def how_was_i_called():
    # the entry one frame up is the caller; .line is its source text
    call = traceback.extract_stack(limit=2)[0].line
    print("I was called like this: %s" % call)

how_was_i_called()

try:
    how_was_i_called().foobar()
except AttributeError:
    pass
returns:
I was called like this: how_was_i_called()
I was called like this: how_was_i_called().foobar()
but please do not use hacks like this in real applications...
No, there is not. foo returns, and what happens with the return value after that is an entirely separate issue.
You could do this, for example:
result = a.foo
if some_condition:
    result.foobar()
i.e., accessing the foobar method on a.foo is an entirely separate expression that may or may not be executed. This could happen at a much later time, or in a separate thread, or after serialising the object to disk and loading it again, etc.
You can hook into attribute access on the returned object, but that'll be too late for your foo property to alter behaviour.