Replace instance with a float from within instance method? - python

I'm writing a Python script to parse some data. At the moment I'm trying to make a class which creates "placeholder" objects. I intend to repeatedly pass values into each "placeholder", and finally turn it into a dict, float, list or string. For reasons that would take a while to describe, it would be a lot easier if I could replace the instance by calling a method on it.
Here's a simplified example
class Event( dict ):
    def __init__( self ):
        self.sumA = 0.0
        self.sumB = 0.0
    def augmentA( self, i ):
        self.sumA += i
    def augmentB( self, i ):
        self.sumB += i
    def seal( self ):
        if self.sumA != 0 and self.sumB != 0:
            self = [ self.sumA, self.sumB ]
        elif self.sumA != 0:
            self = float( self.sumA )
        elif self.sumB != 0:
            self = float( self.sumB )
And what I want to do is:
e = Event()
e.augmentA( 1 )
e.augmentA( 2 )
e.seal()
...and have 'e' turn into a float.
What I am hoping to avoid is:
e = Event()
e.augmentA( 1 )
e.augmentA( 2 )
e = e.getSealedValue()
I totally understand that "self" in my "seal" method is just a local variable, and that reassigning it won't have any effect on the instance outside that scope. I'm unsure, however, how to achieve what I want from within the instance, where it would be most convenient for my code. I also understand I could override all the built-ins ( __getitem__, __str__ ) but that complicates my code a lot.
I'm a Python noob so I'm unsure if this is even possible. Indulge me, please :)

Under some circumstances, Python allows you to change the class of an object on the fly. However, not every object can be converted to every class, as the example below demonstrates (newlines added for readability):
>>> class A(object):
...     pass
...
>>> class B(object):
...     pass
...
>>> a = A()
>>> type(a)
<class '__main__.A'>
>>> a.__class__ = B
>>> type(a)
<class '__main__.B'>
>>> a.__class__ = int
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: __class__ assignment: only for heap types
(I don't know the exact rules off the top of my head, but if your classes use __slots__, for instance, they must be compatible for the conversion to be possible.)
However, as other answerers pointed out, in general it's a very bad idea to do this, even if there were a way to convert every reference to one object into a reference to another. I wouldn't go as far as saying never do it, though; there might be legitimate uses of this technique (for instance, I see it as an easy way of implementing the State design pattern without creating unnecessary clutter).
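For illustration, here is a minimal sketch of that State-pattern use of __class__ assignment; the Connection/Open/Closed names are made up for the example:

class Connection(object):
    def send(self, data):
        raise NotImplementedError

class Open(Connection):
    def send(self, data):
        print("sending " + data)
    def close(self):
        # Swap the instance's class to switch its behaviour in place.
        self.__class__ = Closed

class Closed(Connection):
    def send(self, data):
        raise RuntimeError("connection is closed")

conn = Open()
conn.send("hello")   # sending hello
conn.close()
# conn.send("again") # would now raise RuntimeError: connection is closed

Both Open and Closed are plain heap types with compatible layouts, so the assignment is allowed; the moment either class grew incompatible __slots__, it would stop working.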

Even if there was some sane way to make it work, I would avoid going down this path simply because it changes the type of the object to something completely incompatible with its existing contract.
Now, one could say "but I'm only using it in one place, other code paths won't ever see the old contract", but unfortunately that isn't an argument for this mechanism, since you could just as easily make the value available to the other code paths only once you have the final object.
In short, don't do this and don't even try.

No, you cannot have a variable's value replace itself by calling a method on it. The normal way to do this would be what you stated: e = e.getSealedValue()
You can make an object change behavior, but that's generally considered a bad idea and is likely to result in highly unmaintainable code.
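For reference, a minimal sketch of what that getSealedValue approach might look like, reusing the sums from the question (the exact return rules are an assumption based on the seal logic above):

class Event( dict ):
    def __init__( self ):
        self.sumA = 0.0
        self.sumB = 0.0
    def augmentA( self, i ):
        self.sumA += i
    def augmentB( self, i ):
        self.sumB += i
    def getSealedValue( self ):
        # Return the final value instead of trying to mutate self in place.
        if self.sumA != 0 and self.sumB != 0:
            return [ self.sumA, self.sumB ]
        elif self.sumA != 0:
            return float( self.sumA )
        elif self.sumB != 0:
            return float( self.sumB )
        return None

e = Event()
e.augmentA( 1 )
e.augmentA( 2 )
e = e.getSealedValue()   # e is now the float 3.0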

Related

List not passing between methods inside a class definition

I am trying to create a simple class that inputs a list then appends to the list with a function called "add" which is also defined in the same class.
I keep getting this error: 'list' object has no attribute 'a'
class try1:
    def __init__(self, a=[]):
        self.a = a
        print(a)
        return

    def add(self, b=None):
        self.a.append(b)
        print(a)
        return

if __name__ == "__main__":
    c = try1(['a', 'b', 'c'])
    d = ['d', 'e', 'f']
    try1.add(d)
You've got a couple different things going on here which cause this weird-looking error.
The core problem is that you're trying to call add on the class try1, not the instance you just created, in the variable c. Changing try1.add(d) to c.add(d) produces the expected result (if you also remove the print(a) or change it to print(self.a), since a doesn't exist in that scope).
The odd-looking error happens because b has a default value, so the call doesn't complain about a missing argument. When you call try1.add(d) on the class, d is passed as the first positional argument and becomes self inside the method. You then get an AttributeError because self is now a list, which obviously doesn't have an attribute named a.
Also, you shouldn't use a mutable object as a default argument. And you don't need the bare return statements: any function without a return statement implicitly returns None.
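Putting those fixes together, a corrected version might look like this (using the common None-default idiom in place of the mutable default):

class try1:
    def __init__(self, a=None):
        # Avoid a mutable default argument; give each instance a fresh list.
        self.a = a if a is not None else []
        print(self.a)

    def add(self, b=None):
        self.a.append(b)
        print(self.a)

if __name__ == "__main__":
    c = try1(['a', 'b', 'c'])
    d = ['d', 'e', 'f']
    c.add(d)   # call the method on the instance, not the class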
A few things here. First of all, fix that indentation; Python is really strict about whitespace. Assuming you fix the indentation, you'll run into a couple of other issues. In your main block, you want to be calling c.add(d) (I think).
You're probably running into an error on line 9, which is the error I think you're talking about. In the scope of your method, the variable a doesn't exist yet. self.a does, though. You probably mean to print self.a.
As a side note, class names in Python should start with an uppercase letter. Numbers are allowed, but I'd really try and avoid them.

Python unable to compare bound method to itself

I am attempting to write a test that checks if a variable holding the bound method of a class is the same as another reference to that method. Normally this is not a problem, but it does not appear to work when done within another method of the same class. Here is a minimal example:
class TestClass:
    def sample_method(self):
        pass

    def test_method(self, method_reference):
        print(method_reference is self.sample_method)
I am really using an assert instead of print, but that is neither here nor there since the end result is the same. The test is run as follows:
instance = TestClass()
instance.test_method(instance.sample_method)
The result is False even though I am expecting it to be True. The issue manifests itself in both Python 3.5 and Python 2.7 (running under Anaconda).
I understand that bound methods are closures that are acquired by doing something like TestClass.test_method.__get__(instance, type(instance)). However, I would expect that self.sample_method is already a reference to such a closure, so that self.sample_method and instance.sample_method represent the same reference.
Part of what is confusing me here is the output of the real pytest test that I am running (working on a PR for matplotlib):
assert <bound method TestTransformFormatter.transform1 of <matplotlib.tests.test_ticker.TestTransformFormatter object at 0x7f0101077b70>> is <bound method TestTransformFormatter.transform1 of <matplotlib.tests.test_ticker.TestTransformFormatter object at 0x7f0101077b70>>
E + where <bound method TestTransformFormatter.transform1 of <matplotlib.tests.test_ticker.TestTransformFormatter object at 0x7f0101077b70>> = <matplotlib.ticker.TransformFormatter object at 0x7f0101077e10>.transform
E + and <bound method TestTransformFormatter.transform1 of <matplotlib.tests.test_ticker.TestTransformFormatter object at 0x7f0101077b70>> = <matplotlib.tests.test_ticker.TestTransformFormatter object at 0x7f0101077b70>.transform1
If I understand the output correctly, the actual comparison (the first line) is really comparing the same objects, but somehow turning up False. The only thing I can imagine at this point is that __get__ is in fact being called twice, but I know neither why/where/how, nor how to work around it.
They're not the same reference - the objects representing the two methods occupy different locations in memory:
>>> class TestClass:
...     def sample_method(self):
...         pass
...     def test_method(self, method_reference):
...         print(hex(id(method_reference)))
...         print(hex(id(self.sample_method)))
...
>>> instance = TestClass()
>>> instance.test_method(instance.sample_method)
0x7fed0cc561c8
0x7fed0cc4e688
Changing to method_reference == self.sample_method will make the assert pass, though.
Edit since question was expanded: seems like a flawed test - probably the actual functionality of the code does not require the references to be the same (is), just equal (==). So your change probably didn't break anything except for the test.
While the accepted answer is in no way incorrect, it should be noted that methods are bound on attribute lookup, so every lookup of a.method creates a new bound-method object. Furthermore, the behavior of unbound methods changes between Python 2.X and Python 3.X.
class A:
    def method(self):
        pass

a = A()
print(a.method is a.method)  # False
print(A.method is A.method)  # Python 3.X: True, Python 2.X: False
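To tie the two answers together, here is a small sketch of checks that do hold, assuming the TestClass from the question: bound methods compare equal when they wrap the same function and the same instance, and on Python 3 the underlying function is reachable through __func__.

class TestClass:
    def sample_method(self):
        pass

    def test_method(self, method_reference):
        # Each attribute lookup builds a fresh bound-method object,
        # so identity fails but equality succeeds.
        print(method_reference is self.sample_method)                 # False
        print(method_reference == self.sample_method)                 # True
        print(method_reference.__func__ is TestClass.sample_method)   # True on Python 3

instance = TestClass()
instance.test_method(instance.sample_method)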

Catching calls to __str__ on memoryview() object

I'm starting to port some code from Python2.x to Python3.x, but before I make the jump I'm trying to modernise it to recent 2.7. I'm making good progress with the various tools (e.g. futurize), but one area they leave alone is the use of buffer(). In Python3.x buffer() has been removed and replaced with memoryview() which in general looks to be cleaner, but it's not a 1-to-1 swap.
One way in which they differ is:
In [1]: a = "abcdef"
In [2]: b = buffer(a)
In [3]: m = memoryview(a)
In [4]: print b, m
abcdef <memory at 0x101b600e8>
That is, str(<buffer object>) returns a byte-string containing the contents of the object, whereas memoryviews return their repr(). I think the new behaviour is better, but it's causing issues.
In particular I've got some code which is throwing an exception because it's receiving a byte-string containing <memory at 0x1016c95a8>. That suggests that there's a piece of code somewhere else that is relying on this behaviour to work, but I'm having real trouble finding it.
Does anybody have a good debugging trick for this type of problem?
One possible trick is to write a subclass of memoryview and temporarily change all your memoryview instances to, let's say, memoryview_debug versions:
class memoryview_debug(memoryview):
    def __init__(self, string):
        memoryview.__init__(self, string)

    def __str__(self):
        # ... place a breakpoint, log the call, print stack trace, etc.
        return memoryview.__str__(self)
EDIT:
As noted by the OP, it is apparently impossible to subclass memoryview. Fortunately, thanks to dynamic typing, that's not a big problem in Python; it will just be more inconvenient. You can change the inheritance to composition:
class memoryview_debug:
    def __init__(self, string):
        self.innerMemoryView = memoryview(string)

    def tobytes(self):
        return self.innerMemoryView.tobytes()

    def tolist(self):
        return self.innerMemoryView.tolist()

    # some other methods if used by your code
    # and if overridden in memoryview implementation (e.g. __len__?)

    def __str__(self):
        # ... place a breakpoint, log the call, print stack trace, etc.
        return self.innerMemoryView.__str__()
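If subclassing is off the table, a minimal sketch of how that debug hook might be filled in, using the standard traceback module to reveal who is calling str(); shadowing the memoryview name in the suspect module is just one hypothetical way to wire it in:

import traceback

class memoryview_debug:
    def __init__(self, obj):
        self.innerMemoryView = memoryview(obj)

    def __str__(self):
        # Print the call stack so the offending str() caller shows up in the output.
        traceback.print_stack()
        return str(self.innerMemoryView)

    def __getattr__(self, name):
        # Delegate everything else to the wrapped memoryview.
        return getattr(self.innerMemoryView, name)

# In the module under suspicion, temporarily shadow the builtin:
# memoryview = memoryview_debug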

How can I access variables from the caller, even if it isn't an enclosing scope (i.e., implement dynamic scoping)?

Consider this example:
def outer():
    s_outer = "outer\n"
    def inner():
        s_inner = "inner\n"
        do_something()
    inner()
I want the code in do_something to be able to access the variables of the calling functions further up the call stack, in this case s_outer and s_inner. More generally, I want to call it from various other functions, but always execute it in their respective context and access their respective scopes (implement dynamic scoping).
I know that in Python 3.x, the nonlocal keyword allows access to s_outer from within inner. Unfortunately, that only helps with do_something if it's defined within inner. Otherwise, inner isn't a lexically enclosing scope (similarly, neither is outer, unless do_something is defined within outer).
I figured out how to inspect stack frames with the standard library inspect, and made a small accessor that I can call from within do_something() like this:
def reach(name):
    for f in inspect.stack():
        if name in f[0].f_locals:
            return f[0].f_locals[name]
    return None
and then
def do_something():
print( reach("s_outer"), reach("s_inner") )
works just fine.
Can reach be implemented more simply? How else can I solve the problem?
There is no, and in my opinion should be no, elegant way of implementing reach, since it introduces a new non-standard indirection which is really hard to comprehend, debug, test and maintain. As the Python mantra (try import this) says:
Explicit is better than implicit.
So, just pass the arguments. You-from-the-future will be really grateful to you-from-today.
What I ended up doing was
scope = locals()
and making scope accessible from do_something. That way I don't have to reach, but I can still access the dictionary of the caller's local variables. This is quite similar to building a dictionary myself and passing it on.
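Concretely, that might look like the following sketch (the scope parameter name is just this answer's convention, nothing Python treats specially):

def do_something(scope):
    # Look up the caller's variables explicitly, via the dict it handed over.
    print(scope["s_outer"])

def outer():
    s_outer = "outer\n"
    scope = locals()        # {'s_outer': 'outer\n'}
    do_something(scope)

outer()                     # prints the value of s_outer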
We can get naughtier.
This is an answer to the "Is there a more elegant/shortened way to implement the reach() function?" half of the question.
We can give better syntax for the user: instead of reach("foo"), outer.foo.
This is nicer to type, and the language itself immediately tells you if you used a name that can't be a valid variable (attribute names and variable names have the same constraints).
We can raise an error, to properly distinguish "this doesn't exist" from "this was set to None".
If we actually want to smudge those cases together, we can getattr with the default parameter, or try-except AttributeError.
We can optimize: no need to pessimistically build a list big enough for all the frames at once.
In most cases we probably won't need to go all the way to the root of the call stack.
Just because we're inappropriately reaching up stack frames, violating one of the most important rules of programming (don't have things far away invisibly affecting behavior), doesn't mean we can't be civilized.
If someone is trying to use this Serious API for Real Work on a Python without stack frame inspection support, we should helpfully let them know.
import inspect

class OuterScopeGetter(object):
    def __getattribute__(self, name):
        frame = inspect.currentframe()
        if frame is None:
            raise RuntimeError('cannot inspect stack frames')
        sentinel = object()
        frame = frame.f_back
        while frame is not None:
            value = frame.f_locals.get(name, sentinel)
            if value is not sentinel:
                return value
            frame = frame.f_back
        raise AttributeError(repr(name) + ' not found in any outer scope')

outer = OuterScopeGetter()
Excellent. Now we can just do:
>>> def f():
...     return outer.x
...
>>> f()
Traceback (most recent call last):
  ...
AttributeError: 'x' not found in any outer scope
>>>
>>> x = 1
>>> f()
1
>>> x = 2
>>> f()
2
>>>
>>> def do_something():
...     print(outer.y)
...     print(outer.z)
...
>>> def g():
...     y = 3
...     def h():
...         z = 4
...         do_something()
...     h()
...
>>> g()
3
4
Perversion elegantly achieved.
Is there a better way to solve this problem? (Other than wrapping the respective data into dicts and passing these dicts explicitly to do_something())
Passing the dicts explicitly is a better way.
What you're proposing sounds very unconventional. As code grows, you have to break it down into a modular architecture, with clean APIs between modules. It also has to be easy to comprehend, easy to explain, and easy to hand over to another programmer to modify, improve, or debug. What you're proposing is not a clean API: it's unconventional, with a non-obvious data flow. I suspect it would probably make many programmers grumpy when they saw it. :)
Another option would be to make the functions members of a class, with the data being in the class instance. That could work well if your problem can be modelled as several functions operating on the data object.
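For illustration, a sketch of that class-based option, using the s_outer / s_inner names from the question:

class Context(object):
    def __init__(self):
        self.s_outer = None
        self.s_inner = None

    def outer(self):
        self.s_outer = "outer\n"
        self.inner()

    def inner(self):
        self.s_inner = "inner\n"
        self.do_something()

    def do_something(self):
        # Both values are reachable through self; no frame inspection needed.
        print(self.s_outer, self.s_inner)

Context().outer()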

Call python function as if it were inline

I want to have a function in a different module that, when called, has access to all the variables its caller has access to, and functions just as if its body had been pasted into the caller rather than having its own context, basically like a C macro instead of a normal function. I know I can pass locals() into the function and it can then access the local variables as a dict, but I want to be able to access them normally (e.g. x.y, not x["y"]), and I want all the names the caller has access to, not just the locals, including things that were imported into the caller's file but not into the module that contains the function.
Is this possible to pull off?
Edit 2: Here's the simplest possible example I can come up with of what I'm really trying to do:
def getObj(expression):
    ofs = expression.rfind(".")
    obj = eval(expression[:ofs])
    print "The part of the expression Left of the period is of type ", type(obj),
The problem is that 'expression' requires the imports and local variables of the caller in order to eval without error. In reality there's a lot more than just an eval, so I'm trying to avoid the solution of just passing locals() in and through to the eval(), since that won't fix my general-case problem.
And another, even uglier way to do it -- please don't do this, even if it's possible --
import sys

def insp():
    l = sys._getframe(1).f_locals
    expression = l["expression"]
    ofs = expression.rfind(".")
    expofs = expression[:ofs]
    obj = eval(expofs, globals(), l)
    print "The part of the expression %r Left of the period (%r) is of type %r" % (expression, expofs, type(obj)),

def foo():
    derp = 5
    expression = "derp.durr"
    insp()

foo()
outputs
The part of the expression 'derp.durr' Left of the period ('derp') is of type <type 'int'>
I presume this isn't the answer you wanted to hear, but trying to access local variables from a caller's scope is not a good idea. If you normally program in PHP or C, you might be used to this sort of thing?
If you still want to do this, you might consider creating a class and passing an instance of that class in place of locals():
# other_module.py
def some_func(lcls):
    print(lcls.x)
Then,
>>> import other_module
>>>
>>>
>>> x = 'Hello World'
>>>
>>> class MyLocals(object):
...     def __init__(self, lcls):
...         self.lcls = lcls
...     def __getattr__(self, name):
...         return self.lcls[name]
...
>>> # Call your function with an instance of this instead.
>>> other_module.some_func(MyLocals(locals()))
Hello World
Give it a whirl.
Is this possible to pull off?
Yes (sort of, in a very roundabout way), though I would strongly advise against it in general (more on that later).
Consider:
myfile.py

def func_in_caller():
    print "in caller"

import otherfile
globals()["imported_func"] = otherfile.remote_func
imported_func(123, globals())

otherfile.py

def remote_func(x1, extra):
    for k, v in extra.iteritems():
        globals()[k] = v
    print x1
    func_in_caller()
This yields (as expected):
123
in caller
What we're doing here is trickery: we just copy every item into another namespace in order to make this work. This can (and will) break very easily and/or lead to hard-to-find bugs.
There's almost certainly a better way of solving your problem / structuring your code (we need more information in general on what you're trying to achieve).
From The Zen of Python:
2) Explicit is better than implicit.
In other words, pass in the parameter and don't try to get really fancy just because you think it would be easier for you. Writing code is not just about you.
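For completeness, here is a hedged sketch of what the explicit version of the question's getObj helper could look like: the caller hands over the namespace the expression should be evaluated in, instead of the helper reaching for it.

def getObj(expression, namespace):
    ofs = expression.rfind(".")
    # Evaluate against the namespace the caller chose to expose.
    obj = eval(expression[:ofs], namespace)
    print("The part of the expression left of the period is of type %r" % type(obj))

def foo():
    derp = 5
    expression = "derp.durr"
    # Explicitly hand over whatever names the expression may need.
    getObj(expression, dict(globals(), **locals()))

foo()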
