__str__ method in Python

I wonder why the Python magic method __str__ always expects a return statement rather than a print call?
class test:
    def __init__(self):
        print("constructor called")

    def __call__(self):
        print("callable")

    def __str__(self):
        return "string method"

obj = test()   ## prints "constructor called"
obj()          ## prints "callable"
print(obj)     ## prints "string method"
My question is: why can't I use something like this inside the __str__ method?
def __str__(self):
    print("string method")

This is more about enabling the conversion of an object into a str - your users don't necessarily want all that output printed to the terminal whenever they want to do something like
text = str(obj_instance)
They want text to contain the result, not to have it printed to the terminal.
Doing it your way, the code would effectively be this
text = print(obj_instance)
Which is kind of nonsensical, because print returns None rather than anything useful, so text wouldn't end up containing the string that was passed to str at all.
As you already commented (but since deleted), not providing the correct type for the return value will cause an exception to be raised, for example:
>>> class C(object):
...     def __str__(self):
...         return None
...
>>> str(C())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: __str__ returned non-string (type NoneType)
>>>

Because __str__() is used when you print the object: the user is already calling print, which needs the string that represents the object returned back to it as a value it can write out.
In the example you provided above, if __str__ printed instead of returning, then:
print(obj)
would effectively be translated into:
print(print("string method"))
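To make the point concrete, here is a small sketch (the Greeter class is purely illustrative) showing that print() returns None, so a printing __str__ would leave the caller with nothing to work with:
class Greeter:
    def __str__(self):
        return "hello from Greeter"   # str() and print() both use this value

g = Greeter()
text = str(g)       # text == "hello from Greeter"
result = print(g)   # prints the string, but print() itself returns None
print(result)       # None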

Related

Pickling dynamically created types

I've been trying to get some dynamically created types (i.e. ones created by calling 3-arg type()) to pickle and unpickle nicely. I've been using this module switching trick to hide the details from users of the module and give clean semantics.
I've learned several things already:
The type must be findable with getattr on the module itself
The type must be consistent with what getattr finds, that is to say if we call pickle.dumps(o) then it must be true that type(o) == getattr(module, 'name of type')
Where I'm stuck though is that there still seems to be something odd going on - it seems to be calling __getstate__ on something unexpected.
Here's the simplest setup I've got that reproduces the issue, testing with Python 3.5, but I'd like to target back to 3.3 if possible:
# module.py
import sys
import functools

def dump(self):
    return b'Some data'  # Dummy for testing

def undump(self, data):
    print('Undump: %r' % data)  # Do nothing for testing

# Cheaty demo way to make this consistent
@functools.lru_cache(maxsize=None)
def make_type(name):
    return type(name, (), {
        '__getstate__': dump,
        '__setstate__': undump,
    })

class Magic(object):
    def __init__(self, path):
        self.path = path

    def __getattr__(self, name):
        print('Getting thing: %s (from: %s)' % (name, self.path))
        # for simple testing all calls to make_type must end in last x.y.z.last
        if name != 'last':
            if self.path:
                return Magic(self.path + '.' + name)
            else:
                return Magic(name)
        return make_type(self.path + '.' + name)

# Make the switch
sys.modules[__name__] = Magic('')
And then a quick way to exercise that:
import module
import pickle
f=module.foo.bar.woof.last()
print(f.__getstate__()) # See, *this* works
print('Pickle starts here')
print(pickle.dumps(f))
Which then gives:
Getting thing: foo (from: )
Getting thing: bar (from: foo)
Getting thing: woof (from: foo.bar)
Getting thing: last (from: foo.bar.woof)
b'Some data'
Pickle starts here
Getting thing: __spec__ (from: )
Getting thing: _initializing (from: __spec__)
Getting thing: foo (from: )
Getting thing: bar (from: foo)
Getting thing: woof (from: foo.bar)
Getting thing: last (from: foo.bar.woof)
Getting thing: __getstate__ (from: foo.bar.woof)
Traceback (most recent call last):
File "test.py", line 7, in <module>
print(pickle.dumps(f))
TypeError: 'Magic' object is not callable
I wasn't expecting to see anything looking up __getstate__ on module.foo.bar.woof, but even if we force that lookup to fail by adding:
if name == '__getstate__': raise AttributeError()
into our __getattr__ it still fails with:
Traceback (most recent call last):
File "test.py", line 7, in <module>
print(pickle.dumps(f))
_pickle.PicklingError: Can't pickle <class 'module.Magic'>: it's not the same object as module.Magic
What gives? Am I missing something with __spec__? The docs for __spec__ pretty much just stress setting it appropriately, but don't seem to actually explain much.
More importantly, the bigger question is: how am I supposed to go about making the types I programmatically generate via a pseudo-module's __getattr__ implementation pickle properly?
(And obviously once I've managed to get pickle.dumps to produce something I expect pickle.loads to call undump with the same thing)
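(For reference, here is a minimal sketch of how the __getstate__/__setstate__ pair behaves on an ordinary top-level class, which is the behaviour I'm trying to reproduce; the Plain class is just an illustration.)
import pickle

class Plain:
    def __init__(self):
        self.loaded = None

    def __getstate__(self):
        return b'Some data'      # pickle stores this as the instance state

    def __setstate__(self, state):
        self.loaded = state      # pickle calls this with that state on load

restored = pickle.loads(pickle.dumps(Plain()))
print(restored.loaded)           # b'Some data'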
To pickle f, pickle needs to pickle f's class, module.foo.bar.woof.last.
The docs don't claim support for pickling arbitrary classes. They claim the following:
The following types can be pickled:
...
classes that are defined at the top level of a module
module.foo.bar.woof.last isn't defined at the top level of a module, even a pretend module like module. In this not-officially-supported case, the pickle logic ends up trying to pickle module.foo.bar.woof, either here:
elif parent is not module:
    self.save_reduce(getattr, (parent, lastname))
or here
else if (parent != module) {
    PickleState *st = _Pickle_GetGlobalState();
    PyObject *reduce_value = Py_BuildValue("(O(OO))",
                                           st->getattr, parent, lastname);
    status = save_reduce(self, reduce_value, NULL);
module.foo.bar.woof can't be pickled for multiple reasons. It returns a non-callable Magic instance for all unsupported method lookups, like __getstate__, which is where your first error comes from. The module-switching thing prevents finding the Magic class to pickle it, which is where your second error comes from. There are probably more incompatibilities.
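To illustrate the rule this relies on, here is a small sketch of the condition pickle effectively needs for a class: it must be reachable by name on its module and be the very same object (the Point class is just an illustration):
import pickle
import sys

class Point:                      # defined at the top level of its module
    def __init__(self, x):
        self.x = x

p = Point(3)
mod = sys.modules[type(p).__module__]
# This is essentially the consistency pickle verifies before saving the class by name:
assert getattr(mod, type(p).__qualname__) is type(p)
print(pickle.loads(pickle.dumps(p)).x)   # 3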
As it turned out, making the class callable was just drifting in another wrong direction. The error <class 'module.Magic'>: it's not the same object as module.Magic points at the real problem: when the pickler resolves the class again by name, it gets back a different object from the one it started with, which is a common problem when pickling classes that instantiate themselves this way. Thanks to that hint, the workaround I found was to patch the class with its own type, @mock.patch('module.Magic', type(module.Magic)), so that the lookup stays consistent. This is the short version of the story.
Main.py
import module
import pickle
import mock

f = module.foo.bar.woof.last
print(f().__getstate__())  # See, *this* works
print('Pickle starts here')

@mock.patch('module.Magic', type(module.Magic))
def pickleit():
    return pickle.dumps(f())

print(pickleit())
Magic class
class Magic(object):
    def __init__(self, value):
        self.path = value

    __class__: lambda x: x

    def __getstate__(self):
        print("Shoot me! i'm at " + self.path)
        return dump(self)

    def __setstate__(self, value):
        print('something will never occur')
        return undump(self, value)

    def __spec__(self):
        print("Wrong side of the planet ")

    def _initializing(self):
        print("Even farther lost ")

    def __getattr__(self, name):
        print('Getting thing: %s (from: %s)' % (name, self.path))
        # for simple testing all calls to make_type must end in last x.y.z.last
        if name != 'last':
            if self.path:
                return Magic(self.path + '.' + name)
            else:
                return Magic(name)
        print('terminal stage')
        return make_type(self.path + '.' + name)
Even if this is more like catching the ball with the edge of the bat than a clean solution, I could see the dumped content in my console.

Setting attributes on __func__

In the documentation on instance methods it states that:
Methods also support accessing (but not setting) the arbitrary function attributes on the underlying function object.
But I can't seem to be able to verify that restriction. I tried setting both an arbitrary value and one of the "Special Attributes" of functions:
class cls:
    def foo(self):
        f = self.foo.__func__
        f.a = "some value"          # arbitrary value
        f.__doc__ = "Documentation"
        print(f.a, f.__doc__)
When executed, no errors are produced and the output is as expected:
cls().foo() # prints out f.a, f.__doc__
What is it that I'm misunderstanding with the documentation?
You are misunderstanding what is being said. It says that you can access but not set the attributes of the underlying function object from the method!
>>> class Foo:
...     def foo(self):
...         self.foo.__func__.a = 1
...         print(self.foo.a)
...         self.foo.a = 2
...
>>> Foo().foo()
1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 5, in foo
AttributeError: 'method' object has no attribute 'a'
Note how foo.a is updated when you set it on the __func__ value, but you cannot set it directly using self.foo.a = value.
So the function object can be modified as you please; the method wrapper only provides read-only access to the attributes of the underlying function.
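The same rule can be seen from outside a class body; a small sketch (Widget and counter are just illustrative names):
class Widget:
    def action(self):
        pass

w = Widget()
w.action.__func__.counter = 1   # setting via __func__ works
print(w.action.counter)         # 1 -- readable through the method wrapper
try:
    w.action.counter = 2        # setting on the bound method itself
except AttributeError as e:
    print(e)                    # 'method' object has no attribute 'counter'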

What exactly does "AttributeError: temp instance has no attribute '__getitem__'" mean?

I'm trying to understand a problem I'm having with python 2.7 right now.
Here is my code from the file test.py:
class temp:
    def __init__(self):
        self = dict()
        self[1] = 'bla'
Then, on the terminal, I enter:
from test import temp
a=temp
If I enter a, I get this:
>>> a
<test.temp instance at 0x10e3387e8>
And if I try to read a[1], I get this:
>>> a[1]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: temp instance has no attribute '__getitem__'
Why does this happen?
First, the code you posted cannot yield the error you noted. You have not instantiated the class; a is merely another name for temp. So your actual error message will be:
TypeError: 'classobj' object has no attribute '__getitem__'
Even if you instantiate it (a = temp()) it still won't do what you seem to expect. Assigning self = dict() merely changes the value of the variable self within your __init__() method; it does not do anything to the instance. When the __init__() method ends, this variable goes away, since you did not store it anywhere else.
It seems as if you might want to subclass dict instead:
class temp(dict):
    def __init__(self):
        self[1] = 'bla'
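Alternatively, if you'd rather not subclass dict, here is a minimal sketch that keeps an internal dict and forwards item access (the attribute name _data is just illustrative):
class temp(object):
    def __init__(self):
        self._data = {1: 'bla'}

    def __getitem__(self, key):
        return self._data[key]

a = temp()      # note the parentheses: this creates an instance
print(a[1])     # 'bla'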

How do I make this test pass?

I'm trying to test the return type on __repr__. It's not a string so what is it? What's happening here?
import unittest

class MyClass(unittest.TestCase):
    class Dog(object):
        def __init__(self, initial_name):
            self._name = initial_name

        def get_self(self):
            return self

        def __repr__(self):
            return "Dog named '" + self._name + "'"

    def runTest(self):
        fido = self.Dog("Fido")
        self.assertEqual("Dog named 'Fido'", fido.get_self())  # Fails!

test = MyClass("runTest")
runner = unittest.TextTestRunner()
runner.run(test)
Running this gives:
FAIL: runTest (__main__.MyClass)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/xxxxx/fido.py", line 15, in runTest
self.assertEqual("Dog named 'Fido'", fido.get_self())
AssertionError: "Dog named 'Fido'" != Dog named 'Fido'
----------------------------------------------------------------------
Ran 1 test in 0.006s
FAILED (failures=1)
How can I get this test to pass?
self.assertEqual("Dog named 'Fido'", repr(fido.get_self()))
or just
self.assertEqual("Dog named 'Fido'", repr(fido))
Otherwise, assertEqual is correctly telling you that the string is not equal to the object. When it renders the error message it uses repr on the object, which is why the error looks a bit confusing.
repr returns a string, but fido.get_self() returns a Dog object, not a string.
When there is an assertion error, assertEqual uses repr to display a readable representation of your Dog instance.
Check the types being compared in your assert by printing type() of each side: you are comparing an object (displayed via its __repr__) with a str. To make it work, compare two strings. See Difference between __str__ and __repr__ in Python.
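Putting it together, a minimal sketch of the passing version of the test (same structure as the question's code, with the assertion comparing two strings):
import unittest

class MyClass(unittest.TestCase):
    class Dog(object):
        def __init__(self, initial_name):
            self._name = initial_name

        def get_self(self):
            return self

        def __repr__(self):
            return "Dog named '" + self._name + "'"

    def runTest(self):
        fido = self.Dog("Fido")
        self.assertEqual("Dog named 'Fido'", repr(fido.get_self()))  # passes

unittest.TextTestRunner().run(MyClass("runTest"))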

How to add abilities to strings in Python

This is more of a curiosity question than anything else. I'm new with Python and playing around with it. I've just looked at the base64 module. What if instead of doing:
import base64
string = 'Foo Bar'
encoded = base64.b64encode(string)
I wanted to do something like:
>>> class b64string():
...     <something>
...
>>> string = b64string('Foo Bar')
>>> string
'Foo Bar'
>>> string.encode64()
'Rm9vIEJhcg=='
>>> string
'Rm9vIEJhcg=='
>>> string.assign('QmFyIEZvbw==')
>>> string
'QmFyIEZvbw=='
>>> string.b64decode()
'Bar Foo'
>>> string
'Bar Foo'
Is there a simple, pythonic way to create that class?
I've begun with this:
>>> class b64string(base64):
...     def __init__(self, v):
...         self.value = v
And already I get:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: Error when calling the metaclass bases
module.__init__() takes at most 2 arguments (3 given)
And don't get me started on (just to see what would happen):
>>> class b64string(str, base64): pass
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: Error when calling the metaclass bases
metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases
I know how to do it manually by listing all of the attributes of base64 in a new class and calling them with the stored value as argument. But is there a neat, pythonic way to do this? Is it a bad idea to do it? The idea would be, if needed, to do it with many such modules and have "super strings" that would have as modules all the things I would need to do with them. Is that bad? Is it un-pythonic? If it is pythonic, how is it done?
I don't think creating such complex string-like classes is a good idea, but if you really want to, here's a simple snippet that runs your examples.
First, we define a class that's a generic string-wrapper. Its core is a __getattr__ function that forwards every method call to a given self.module, adding self.string as the first parameter and remembering the result on self.string.
import base64

class ModuledString(object):
    def __init__(self, string):
        self.string = string

    def __getattr__(self, attrname):
        def func(*args, **kwargs):
            result = getattr(self.module, attrname)(self.string, *args, **kwargs)
            self.string = result
            return result
        return func

    def __str__(self):
        return str(self.string)
Creating a string-wrapper with base64 capabilities is then easy:
class B64String(ModuledString):
    module = base64

if __name__ == '__main__':
    string = B64String('Foo Bar')
    print string
    # 'Foo Bar'
    print string.b64encode()
    # 'Rm9vIEJhcg=='
    print string
    # 'Rm9vIEJhcg=='
    string.string = 'QmFyIEZvbw=='
    print string
    # 'QmFyIEZvbw=='
    print string.b64decode()
    # 'Bar Foo'
Note that the above examples work only because b64encode and b64decode take a string as the first argument and return a string as the result (there is no validation in my __getattr__ function). A random function from some random module would probably raise some kind of exception. So, after all, it would be better to restrict the usage to a predefined set of functions from a given module, but it should be easy now.
I repeat, I don't recommend using such code in any serious project, only for fun.
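For comparison, a simpler route that covers just the base64 case is to subclass str and add the two methods explicitly rather than forwarding every attribute lookup; a Python 3 sketch (B64Str is just an illustrative name):
import base64

class B64Str(str):
    def b64encode(self):
        return B64Str(base64.b64encode(self.encode()).decode())

    def b64decode(self):
        return B64Str(base64.b64decode(self.encode()).decode())

s = B64Str('Foo Bar')
print(s.b64encode())               # Rm9vIEJhcg==
print(s.b64encode().b64decode())   # Foo Bar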
