Python OOP: how to return an object

I'm trying to learn OOP in Python. The code below gives you a better idea of what I'm doing. I want to return an object that allows me to call other methods on that data. Is this the right way?
content = HTTP().GET(resource="photo/2/")
content.get_image()
Class
import requests

class HTTP(object):
    def __init__(self):
        """
        Creates a new instance of the class and assigns local variables.
        """
        self._resource = None
        self._payload = None
        self._response = None

    @property
    def resource(self):
        return self._resource

    @resource.setter
    def resource(self, value):
        self._resource = "http://api.test.com/%s" % value

    @property
    def payload(self):
        return self._payload

    @payload.setter
    def payload(self, value):
        self._payload = value

    @property
    def response(self):
        return self._response

    @response.setter
    def response(self, value):
        self._response = value

    def GET(self, resource):
        """
        Sends a GET request. Returns :class:`Response` object.

        :param resource: URL for the new :class:`Request` object.
        """
        self.resource = resource
        self.response = requests.get(self.resource).json()
        return self

    def get_image(self):
        """
        Gets raw image from response.

        :return: image
        """
        return requests.get(self.response["raw"])
Later I may want to extend this and do
content = HTTP().POST(resource="photo/2/", payload='{"somekey":"somevalue"}')
or even:
content = HTTP().GET(resource="photo/2/")
content.POST(payload='{"somekey":"somevalue"}')

You don't have to do it this way. You can just set the resource attribute and then operate on your instance of the HTTP object, like this:
content = HTTP()
content.resource = valueToSet
content.response = valueToSet
And that's it.

If you want to be able to read and modify an attribute, there is generally no need for getter and setter methods.
A good reason to use properties is e.g. if you need to check the incoming values and possibly raise an exception.
But in that case I would propose you use a method, so that it is obvious to the user that you are executing some code, because you wouldn't expect an exception to occur when modifying an attribute.
If you want to prevent properties from modification, you should use a metaclass.
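For illustration, a minimal sketch of the validation case (the Photo class, the width attribute, and the bound are made up for this example):
class Photo(object):
    def __init__(self, width):
        self.width = width  # goes through the setter below

    @property
    def width(self):
        return self._width

    @width.setter
    def width(self, value):
        # Validate the incoming value and raise if it is unacceptable.
        if value <= 0:
            raise ValueError("width must be positive")
        self._width = value
From the caller's point of view photo.width = -5 raises, which is exactly the surprise the previous paragraph warns about; an explicit set_width(-5) method call would make the possibility of an exception more obvious.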

Related

Python Class "Main" Value

I'm pretty new to Python and have been writing code that will read from and write to a binary file.
I decided to create classes for every type of data that will be contained within the file, and to keep them organized I made one class they would all inherit from, called InteriorIO. I want each class to have a read and write method which would read/write the data to/from the file. At the same time as inheriting from InteriorIO, however, I want them to behave like str or int in that they return the value they contain, so I'd modify either __str__ or __int__ depending on what they most closely resemble.
from abc import ABCMeta, abstractmethod
import struct

class InteriorIO(object):
    __metaclass__ = ABCMeta

    @abstractmethod
    def read(this, f):
        pass

    @abstractmethod
    def write(this, f):
        pass

class byteIO(InteriorIO):
    def __init__(this, value=None):
        this.value = value

    def read(this, f):
        this.value = struct.unpack("B", f.read(1))[0]

    def __str__(this):
        return str(this.value)

class U16IO(InteriorIO):
    def __init__(this, value=None):
        this.value = value

    def read(this, f):
        this.value = struct.unpack("<H", f.read(2))[0]

    def __int__(this):
        return this.value
# how I'd like it to work
f.open("C:/some/path/file.bin")
# In the file, the fileVersion is a U16
fileVersion = U16IO()
# We read the value from the file, storing it in fileVersion
fileVersion.read(f)
# writes the fileVersion that was just read from the file
print(str(fileVersion))
# now let's say we want to write the number 35 to the file in the form of a U16, so we store the value 35 in valueToWrite
valueToWrite = U16IO(35)
# prints the value 35
print(valueToWrite)
# writes the number 35 to the file
valueToWrite.write(f)
f.close()
The code on the bottom works, but the classes feel wrong and too ambiguous. I'm setting this.value, which is a random name I came up with, for every object as a sort of "main" value, and then returning said value as the type I want it to be.
What is the cleanest way to organize my classes such that they all inherit from InteriorIO, yet behave like a str or int in that they return their value?
I think in that case you may want to consider the Factory Design Pattern.
Here is a simple example to explain the idea:
class Cup:
    color = ""

    # This is the factory method
    @staticmethod
    def getCup(cupColor, value):
        if cupColor == "red":
            return RedCup(value)
        elif cupColor == "blue":
            return BlueCup(value)
        else:
            return None

class RedCup(Cup):
    color = "Red"
    def __init__(self, value):
        self.value = value

class BlueCup(Cup):
    color = "Blue"
    def __init__(self, value):
        self.value = value

# A little testing
redCup = Cup.getCup("red", 10)
print("{} ({})".format(redCup.color, redCup.__class__.__name__))
blueCup = Cup.getCup("blue", 20)
print("{} ({})".format(blueCup.color, blueCup.__class__.__name__))
So you have a factory Cup which contains a static method getCup that, given a color and a value, decides which object to "generate", hence the name "factory".
Then in your code you only need to call the factory's getCup method and it will return the appropriate class instance to work with.
The way to deal with __int__ and __str__, I think, is that in the classes where you are missing either, you just implement it and return None. So your U16IO would implement a __str__ method that returns None and your byteIO would implement an __int__ that also returns None.
Why are you even using classes here? It seems overly complicated.
You could just define two functions, read and write:
import struct

def bread(format, binaryfile):
    return struct.unpack(format, binaryfile.read(struct.calcsize(format)))

def bwrite(format, binaryfile, *args):
    binaryfile.write(struct.pack(format, *args))
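For instance, reading the U16 file version from the earlier example could look roughly like this (the path and the "<H" format string are taken from the question; bread returns the tuple that struct.unpack produces):
with open("C:/some/path/file.bin", "r+b") as f:
    file_version = bread("<H", f)[0]  # struct.unpack returns a tuple; take the first field
    bwrite("<H", f, 35)               # pack and write the value 35 as a U16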

Declaring class attributes in __init__ vs. with @property

If I'm creating a class that needs to store properties, when is it appropriate to use a @property decorator and when should I simply define them in __init__?
The reasons I can think of:
Say I have a class like
class Apple:
    def __init__(self):
        self.foodType = "fruit"
        self.edible = True
        self.color = "red"
This works fine. In this case, it's pretty clear to me that I shouldn't write the class as:
class Apple:
    @property
    def foodType(self):
        return "fruit"

    @property
    def edible(self):
        return True

    @property
    def color(self):
        return "red"
But say I have a more complicated class, which has slower methods (say, fetching data over the internet).
I could implement this assigning attributes in __init__:
class Apple:
    def __init__(self):
        self.wikipedia_url = "https://en.wikipedia.org/wiki/Apple"
        self.wikipedia_article_content = requests.get(self.wikipedia_url).text
or I could implement this with @property:
class Apple:
    def __init__(self):
        self.wikipedia_url = "https://en.wikipedia.org/wiki/Apple"

    @property
    def wikipedia_article_content(self):
        return requests.get(self.wikipedia_url).text
In this case, the latter is about 50,000 times faster to instantiate. However, I could argue that if I were fetching wikipedia_article_content multiple times, the former is faster:
a = Apple()
a.wikipedia_article_content
a.wikipedia_article_content
a.wikipedia_article_content
In which case, the former is ~3 times faster because it has one third the number of requests.
My question
Are the only differences between assigning attributes in these two ways the ones I've thought of? What else does @property allow me to do other than save time (in some cases)? For properties that take some time to assign, is there a "right way" to assign them?
Using a property allows for more complex behavior, such as fetching the article content only when it has changed, or only after a certain time period has passed.
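A rough sketch of the time-based variant (the 60-second window and the helper attribute names are made up for illustration):
import time
import requests

class Apple:
    _refresh_after = 60  # seconds; arbitrary value for illustration

    def __init__(self):
        self.wikipedia_url = "https://en.wikipedia.org/wiki/Apple"
        self._content = None
        self._fetched_at = 0.0

    @property
    def wikipedia_article_content(self):
        # Refetch only if there is no cached copy yet or the cached copy is too old.
        if self._content is None or time.time() - self._fetched_at > self._refresh_after:
            self._content = requests.get(self.wikipedia_url).text
            self._fetched_at = time.time()
        return self._content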
Yes, I would suggest using property in cases like this. If you want to make it lazy or cached you can subclass property.
This is just an implementation of a lazy property. It does some operations inside the property and returns the result. The result is saved on the instance under another name, and each subsequent access of the property just returns the saved result.
class LazyProperty(property):
    def __init__(self, *args, **kwargs):
        # Let property set everything up
        super(LazyProperty, self).__init__(*args, **kwargs)
        # We need a name to save the cached result. If the property is called
        # "test", save the result as "_test".
        self._key = '_{0}'.format(self.fget.__name__)

    def __get__(self, obj, owner=None):
        # Called on the class, not the instance
        if obj is None:
            return self
        # Value is already fetched, so just return the stored value
        elif self._key in obj.__dict__:
            return obj.__dict__[self._key]
        # Value is not fetched, so fetch, save and return it
        else:
            val = self.fget(obj)
            obj.__dict__[self._key] = val
            return val
This allows you to calculate the value once and then always return it:
class Test:
    def __init__(self):
        pass

    @LazyProperty
    def test(self):
        print('Doing some very slow stuff.')
        return 100
This is how it would work (obviously you need to adapt it for your case):
>>> a = Test()
>>> a._test # The property hasn't been called so there is no result saved yet.
AttributeError: 'Test' object has no attribute '_test'
>>> a.test # First property access will evaluate the code you have in your property
Doing some very slow stuff.
100
>>> a.test # Accessing the property again will give you the saved result
100
>>> a._test # Or access the saved result directly
100

Lazy-loading variables using overloaded decorators

I have a state object that represents a system. Properties within the state object are populated from [huge] text files. As not every property is accessed every time a state instance is created, it makes sense to load them lazily:
class State:
    def import_positions(self):
        self._positions = {}
        # Code which populates self._positions

    @property
    def positions(self):
        try:
            return self._positions
        except AttributeError:
            self.import_positions()
            return self._positions

    def import_forces(self):
        self._forces = {}
        # Code which populates self._forces

    @property
    def forces(self):
        try:
            return self._forces
        except AttributeError:
            self.import_forces()
            return self._forces
There's a lot of repetitive boilerplate code here. Moreover, sometimes an import_abc can populate a few variables (i.e. import a few variables from a small data file if it's already open).
It makes sense to overload @property such that it accepts a function to "provide" that variable, viz:
class State:
    def import_positions(self):
        self._positions = {}
        # Code which populates self._positions

    @lazyproperty(import_positions)
    def positions(self):
        pass

    def import_forces(self):
        self._forces = {}
        # Code which populates self._forces and self._strain

    @lazyproperty(import_forces)
    def forces(self):
        pass

    @lazyproperty(import_forces)
    def strain(self):
        pass
However, I cannot seem to find a way to trace exactly what methods are being called in the @property decorator. As such, I don't know how to approach overloading @property into my own @lazyproperty.
Any thoughts?
Maybe you want something like this. It's a sort of simple memoization function combined with @property.
def lazyproperty(func):
    values = {}
    def wrapper(self):
        if self not in values:
            values[self] = func(self)
        return values[self]
    wrapper.__name__ = func.__name__
    return property(wrapper)

class State:
    @lazyproperty
    def positions(self):
        print 'loading positions'
        return {1, 2, 3}

s = State()
print s.positions
print s.positions
Which prints:
loading positions
set([1, 2, 3])
set([1, 2, 3])
Caveat: entries in the values dictionary won't be garbage collected, so it's not suitable for long-running programs. If the loaded value is the same for all instances, it can be stored on the function object itself for better speed and memory use:
try:
    return func.value
except AttributeError:
    func.value = func(self)
    return func.value
I think you can remove even more boilerplate by writing a custom descriptor class that decorates the loader method. The idea is to have the descriptor itself encode the lazy-loading logic, meaning that the only thing you define in an actual method is the loader itself (which is the only thing that, apparently, really does have to vary for different values). Here's an example:
class LazyDesc(object):
    def __init__(self, func):
        self.loader = func
        self.secretAttr = '_' + func.__name__

    def __get__(self, obj, cls):
        try:
            return getattr(obj, self.secretAttr)
        except AttributeError:
            print("Lazily loading", self.secretAttr)
            self.loader(obj)
            return getattr(obj, self.secretAttr)

class State(object):
    @LazyDesc
    def positions(self):
        self._positions = {'some': 'positions'}

    @LazyDesc
    def forces(self):
        self._forces = {'some': 'forces'}
Then:
>>> x = State()
>>> x.forces
Lazily loading _forces
{'some': 'forces'}
>>> x.forces
{'some': 'forces'}
>>> x.positions
Lazily loading _positions
{'some': 'positions'}
>>> x.positions
{'some': 'positions'}
Notice that the "lazy loading" message was printed only on the first access for each attribute. This version also auto-creates the "secret" attribute to hold the real data by prepending an underscore to the method name (i.e., data for positions is stored in _positions. In this example, there's no setter, so you can't do x.positions = blah (although you can still mutate the positions with x.positions['key'] = val), but the approach could be extended to allow setting as well.
The nice thing about this approach is that your lazy logic is transparently encoded in the descriptor __get__, meaning that it easily generalizes to other kinds of boilerplate that you might want to abstract away in a similar manner.
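For example, a setter could be added to LazyDesc along these lines (a sketch; it simply overwrites, or pre-seeds, the cached value):
class LazyDesc(object):
    # __init__ and __get__ as above, plus:
    def __set__(self, obj, value):
        # Store straight into the "secret" attribute; __get__ will find it there
        # and never call the loader for this instance again.
        setattr(obj, self.secretAttr, value)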
However, I cannot seem to find a way to trace exactly what methods are being called in the @property decorator.
property is actually a type (whether you use it with the decorator syntax or not is orthogonal), which implements the descriptor protocol (https://docs.python.org/2/howto/descriptor.html). An overly simplified (I skipped the deleter, doc and quite a few other things...) pure-python implementation would look like this:
class property(object):
    def __init__(self, fget=None, fset=None):
        self.fget = fget
        self.fset = fset

    def setter(self, func):
        self.fset = func
        return func

    def __get__(self, obj, type=None):
        return self.fget(obj)

    def __set__(self, obj, value):
        if self.fset:
            self.fset(obj, value)
        else:
            raise AttributeError("Attribute is read-only")
Now overloading property is not necessarily the simplest solution. In fact there are quite a few existing implementations out there, including Django's "cached_property" (cf. http://ericplumb.com/blog/understanding-djangos-cached_property-decorator.html for more about it) and pydanny's "cached-property" package (https://pypi.python.org/pypi/cached-property/0.1.5).
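For what it's worth, since Python 3.8 the standard library also ships functools.cached_property, which covers the plain "compute once, then reuse" case without any custom code:
from functools import cached_property

class State:
    @cached_property
    def positions(self):
        print('loading positions')
        return {1, 2, 3}  # stored in the instance __dict__ after the first access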

Python "callable" attribute (pseudo-property)

In Python, I can alter the state of an instance by directly assigning to attributes, or by making method calls which alter the state of the attributes:
foo.thing = 'baz'
or:
foo.thing('baz')
Is there a nice way to create a class which would accept both of the above forms which scales to large numbers of attributes that behave this way? (Shortly, I'll show an example of an implementation that I don't particularly like.) If you're thinking that this is a stupid API, let me know, but perhaps a more concrete example is in order. Say I have a Document class. Document could have an attribute title. However, title may want to have some state as well (font,fontsize,justification,...), but the average user might be happy enough just setting the title to a string and being done with it ...
One way to accomplish this would be to:
class Title(object):
    def __init__(self, text, font='times', size=12):
        self.text = text
        self.font = font
        self.size = size

    def __call__(self, *text, **kwargs):
        if text:
            self.text = text[0]
        for k, v in kwargs.items():
            setattr(self, k, v)

    def __str__(self):
        return '<title font={font}, size={size}>{text}</title>'.format(text=self.text, size=self.size, font=self.font)

class Document(object):
    _special_attr = set(['title'])

    def __setattr__(self, k, v):
        if k in self._special_attr and hasattr(self, k):
            getattr(self, k)(v)
        else:
            object.__setattr__(self, k, v)

    def __init__(self, text="", title=""):
        self.title = Title(title)
        self.text = text

    def __str__(self):
        return str(self.title) + '<body>' + self.text + '</body>'
Now I can use this as follows:
doc = Document()
doc.title = "Hello World"
print (str(doc))
doc.title("Goodbye World",font="Helvetica")
print (str(doc))
This implementation seems a little messy, though (with _special_attr). Maybe that's because this is a messed-up API. I'm not sure. Is there a better way to do this? Or did I stray a little too far from the beaten path on this one?
I realize I could use @property for this as well, but that wouldn't scale well at all if I had more than just one attribute which is to behave this way -- I'd need to write a getter and setter for each, yuck.
It is a bit harder than the previous answers assume.
Any value stored in the descriptor will be shared between all instances, so it is not the right place to store per-instance data.
Also, obj.attrib(...) is performed in two steps:
tmp = obj.attrib
tmp(...)
Python doesn't know in advance that the second step will follow, so you always have to return something that is callable and has a reference to its parent object.
In the following example that reference is implied in the set argument:
class CallableString(str):
    def __new__(class_, set, value):
        inst = str.__new__(class_, value)
        inst._set = set
        return inst

    def __call__(self, value):
        self._set(value)

class A(object):
    def __init__(self):
        self._attrib = "foo"

    def get_attrib(self):
        return CallableString(self.set_attrib, self._attrib)

    def set_attrib(self, value):
        try:
            value = value._value
        except AttributeError:
            pass
        self._attrib = value

    attrib = property(get_attrib, set_attrib)
a = A()
print a.attrib
a.attrib = "bar"
print a.attrib
a.attrib("baz")
print a.attrib
In short: what you want cannot be done transparently. You'll write better Python code if you don't insist on hacking around this limitation.
You can avoid having to use @property on potentially hundreds of attributes by simply creating a descriptor class that follows the appropriate rules:
# Warning: Untested code ahead
class DocAttribute(object):
    tag_str = "<{tag}{attrs}>{text}</{tag}>"

    def __init__(self, tag_name, default_attrs=None):
        self._tag_name = tag_name
        self._attrs = default_attrs if default_attrs is not None else {}

    def __call__(self, *text, **attrs):
        self._text = "".join(text)
        self._attrs.update(attrs)
        return self

    def __get__(self, instance, cls):
        return self

    def __set__(self, instance, value):
        self._text = value

    def __str__(self):
        # Formatting self._attrs properly is left as an exercise for the reader
        return self.tag_str.format(tag=self._tag_name, attrs="", text=self._text)
Then you can use Document's __setattr__ method to add a descriptor based on this class if it is in a white list of approved names (or not in a black list of forbidden ones, depending on your domain):
class Document(object):
    # prelude

    def __setattr__(self, name, value):
        if self.is_allowed(name):  # Again, left as an exercise for the reader
            object.__setattr__(self, name, DocAttribute(name)(value))

Namespaces inside class in Python3

I am new to Python and I wonder if there is any way to aggregate methods into 'subspaces'. I mean something similar to this syntax:
smth = Something()
smth.subspace.do_smth()
smth.another_subspace.do_smth_else()
I am writing an API wrapper and I'm going to have a lot of very similar methods (differing only in URI), so I thought it would be good to place them in a few subspaces that correspond to the API request categories. In other words, I want to create namespaces inside a class. I don't know if this is even possible in Python and have no idea what to look for in Google.
I will appreciate any help.
One way to do this is by defining subspace and another_subspace as properties that return objects that provide do_smth and do_smth_else respectively:
class Something:
    @property
    def subspace(self):
        class SubSpaceClass:
            def do_smth(other_self):
                print('do_smth')
        return SubSpaceClass()

    @property
    def another_subspace(self):
        class AnotherSubSpaceClass:
            def do_smth_else(other_self):
                print('do_smth_else')
        return AnotherSubSpaceClass()
Which does what you want:
>>> smth = Something()
>>> smth.subspace.do_smth()
do_smth
>>> smth.another_subspace.do_smth_else()
do_smth_else
Depending on what you intend to use the methods for, you may want to make SubSpaceClass a singleton, but I doubt the performance gain is worth it.
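If recreating SubSpaceClass on every access bothers you, one sketch of a cheaper variant (the _SubSpace helper name is made up here) is to define the helper once and cache an instance per object:
class _SubSpace:
    def __init__(self, parent):
        self._parent = parent

    def do_smth(self):
        print('do_smth')

class Something:
    def __init__(self):
        self._subspace = _SubSpace(self)

    @property
    def subspace(self):
        return self._subspace  # same object every time, no class re-creation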
I had this need a couple years ago and came up with this:
from functools import partial

class Registry:
    """Namespace within a class."""

    def __get__(self, obj, cls=None):
        if obj is None:
            return self
        else:
            return InstanceRegistry(self, obj)

    def __call__(self, name=None):
        def decorator(f):
            use_name = name or f.__name__
            if hasattr(self, use_name):
                raise ValueError("%s is already registered" % use_name)
            setattr(self, use_name, f)
            return f
        return decorator

class InstanceRegistry:
    """
    Helper for accessing a namespace from an instance of the class.

    Used internally by :class:`Registry`. Returns a partial that will pass
    the instance as the first parameter.
    """
    def __init__(self, registry, obj):
        self.__registry = registry
        self.__obj = obj

    def __getattr__(self, attr):
        return partial(getattr(self.__registry, attr), self.__obj)

# Usage:
class Something:
    subspace = Registry()
    another_subspace = Registry()

@Something.subspace()
def do_smth(self):
    # `self` will be an instance of Something
    pass

@Something.another_subspace('do_smth_else')
def this_can_be_called_anything_and_take_any_parameter_name(obj, other):
    # Call it `obj` or whatever else if `self` outside a class is unsettling
    pass
At runtime:
>>> smth = Something()
>>> smth.subspace.do_smth()
>>> smth.another_subspace.do_smth_else('other')
This is compatible with Py2 and Py3. Some performance optimizations are possible in Py3 because __set_name__ tells us what the namespace is called and allows caching the instance registry.
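A sketch of that Py3 optimization (this part is not in the snippet above): __set_name__ records the attribute name, and __get__ caches the InstanceRegistry in the instance's __dict__ so later lookups skip the descriptor entirely:
class Registry:
    """Namespace within a class (Py3.6+ variant; __call__ stays as above)."""

    def __set_name__(self, owner, name):
        # Called automatically when the owning class body is executed.
        self._name = name

    def __get__(self, obj, cls=None):
        if obj is None:
            return self
        # Cache the per-instance helper; because Registry is a non-data
        # descriptor, the cached entry shadows it on subsequent lookups.
        if self._name not in obj.__dict__:
            obj.__dict__[self._name] = InstanceRegistry(self, obj)
        return obj.__dict__[self._name]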
