converting a subclass with __slots__ to json - python

I am a total noob in Python, and once again I dreamed up something I just couldn't achieve.
I wanted to have a class, which can be instantiated as such:
my_class = MyClass(**params)
and be consumed as such, in Flask:
jsonify(my_class)
The expected outcome would be a JSON:
{ "key" : "value", ... }
Now, the implementation of MyClass is,
class MyClass(NamedMutableSequence, Document):
__slots__ = (
'key_1',
'key_2',
'key_3'
)
def __init__(self, **params):
NamedMutableSequence.__init__(self, **params)
Document.__init__(self, 'myclass')
def save(self):
self._db_col.update({'key_1': self.key_1}, {'key_2': self.key_2, 'key_3': self.key_3})
By now, you are wondering what NamedMutableSequence and Document are...
class NamedMutableSequence(Sequence):
__slots__ = ()
def __init__(self, *positional_values, **keyword_values):
subclass_properties = self.__slots__
for key in subclass_properties:
setattr(self, key, keyword_values.get(key))
if positional_values:
for key, value in zip(subclass_properties, positional_values):
setattr(self, key, value)
def __str__(self):
values = ', '.join('%s=%r' % (key, getattr(self, key)) for key in self.__slots__)
return '%s(%s)' % (self.__class__.__name__, values)
__repr__ = __str__
def __getitem__(self, item):
return getattr(self, item)
def __setitem__(self, item, value):
return setattr(self, item, value)
def __len__(self):
return len(self.__slots__)
Admittedly, I just copied someone's solution to a mutable namedtuple for this base class and fixed __getitem__ & __setitem__ to allow my_class.key_1 = 'some value'
class Document():
__slots__ = ('_db_col',)
def __init__(self, collection):
self._db_col = mongo_db[collection]
This is just what I threw together in an attempt at a base class which I will be using throughout my model classes for the db connection.
This is, in my opinion, where I got ahead of myself and created a mess, because no matter what I try, I keep raising TypeError: {string value of my_class} is not JSON serializable.
To make matters worse, when I try dict(my_class), I get an attribute name must be string error raised from getattr().
I would still like to keep the base classes and I still need to make it JSON serializable.
How can I save myself?

I finally found an answer; the solution came from another Stack Overflow post (How can I convert python class with slots to dictionary?).
What I did was just to add another method on the NamedMutableSequence as such:
def json(self):
return {key : getattr(self, key, None) for key in self.__slots__}
and just call it whenever I need a JSON-serializable dictionary, like so:
my_class = MyClass(**params)
jsonify(my_class.json())

Related

How do you pass arguments to __metaclass__ in Python

I have a metaclass for row data. Any record should inherit from this class, so below I'm trying to inherit twice, once for programs and once for assets. But I need to pass an OrderedDict to the metaclass, or alternatively move the slot and init functions into the actual classes... which seems like a waste of space.
###############################################################
#NORMALIZED CLASS ROWS
###############################################################
class MetaNormRow(type, OrderedDict):
__slots__ = list(OrderedDict.keys())
def __init__(self, **kwargs):
for arg, default in OrderedDict.items():
setattr(self, arg, re.sub(r'[^\x00-\x7F]', '', kwargs.get(arg, default)))
print (str(arg) + " : "+ str(re.sub(r'[^\x00-\x7F]', '', kwargs.get(arg, default))))
def items(self):
for slot in self.__slots__:
yield slot, getattr(self, slot)
def values(self):
for slot in self.__slots__:
yield getattr(self, slot)
class NormAsset(object):
__metaclass__ = MetaNormRow(DefaultAsset)
class NormProg(object):
__metaclass__ = MetaNormRow(DefaultProgs)
Here is how I will use the NormAsset and Prog classes:
kwargs = {
"status": norm_status,
"computer_name": norm_comp_name,
"domain_name": norm_domain,
"serial_num": norm_serial,
"device_type": norm_device_type,
"mfr": norm_mfr,
"model": norm_model,
"os_type": norm_os_type,
"os_ver": norm_os_ver,
"os_subver": norm_os_subver,
"location_code": norm_location_code,
"tan_id": tan_id,
"tan_comp_name": tan_comp_name,
"tan_os": tan_os,
"tan_os_build": tan_os_build,
"tan_os_sp": tan_os_sp,
"tan_country_code": tan_country_code,
"tan_mfr": tan_mfr,
"tan_model": tan_model,
"tan_serial": tan_serial
}
norm_tan_dict[norm_comp_name] = rows.NormAsset(**kwargs)
To clarify, the following class works 100%... but I need about 10 of these, and the only thing that differs is the DefaultAsset dictionary... so I feel there should be a way to do this without repeating it for every class... that's the whole point of class inheritance:
class NormAsset(object):
__slots__ = list(DefaultAsset.keys())
def __init__(self, **kwargs):
for arg, default in DefaultAsset.items():
setattr(self, arg, re.sub(r'[^\x00-\x7F]', '', kwargs.get(arg, default)))
#print (str(arg) + " : "+ str(re.sub(r'[^\x00-\x7F]', '', kwargs.get(arg, default))))
def items(self):
for slot in self.__slots__:
yield slot, getattr(self, slot)
def values(self):
for slot in self.__slots__:
yield getattr(self, slot)
What you need is just ordinary class inheritance, and maybe an ordinary function to work as a factory for your classes.
Since the only thing you need is the ordered list of keys that ends up in each class's __slots__, that is just it: you can build a base class with the code you already have and simply inherit from it for your class structures.
If you want the full functionality of a mapping like dict (being able to retrieve elements, get the length, and so on), I'd recommend inheriting from collections.abc.MutableMapping instead of OrderedDict. Not least because, if you inherit from OrderedDict, the __slots__ declaration becomes worthless: dict and OrderedDict keep their data arranged in ways that are not accessible from Python code. This way you can keep your data solely in each class's __slots__.
Also, collections.abc.MutableMapping is crafted so that you only implement a minimal set of methods, from which it derives all of the remaining dict-like functionality.
So, adapting your last example class, you'd have something like this.
import re
from collections.abc import MutableMapping
class BaseAsset(MutableMapping):
# A base class for slotted classes needs to declare __slots__ as well.
__slots__ = []
default_asset = None
def __init__(self, **kwargs):
for arg, default in self.__class__.default_asset.items():
value = kwargs.get(arg, default)
setattr(self, arg, re.sub(r'[^\x00-\x7F]', '', value))
def __getitem__(self, item):
return getattr(self, item)
def __setitem__(self, item, value):
if item in self.__slots__:
return setattr(self, item, value)
raise KeyError
def __delitem__(self, item):
if item in self.__slots__:
return delattr(self, item)
raise KeyError
def __iter__(self):
yield from iter(self.__slots__)
def __len__(self):
return len(self.__slots__)
def __repr__(self):
return f"{self.__class__.__name__}(**{dict(self)})"
def asset_class_factory(name, DefaultAsset):
class CustomAsset(BaseAsset):
__slots__ = list(DefaultAsset.keys())
default_asset = DefaultAsset
CustomAsset.__name__ = name
return CustomAsset
And this is it working:
In [182]: d = {"banana": "nanica"}
In [183]: FruitClass = asset_class_factory("FruitClass", d)
In [184]: f = FruitClass()
In [185]: f.banana
Out[185]: 'nanica'
In [186]: f
Out[186]: FruitClass(**{'banana': 'nanica'})

python getting object properties with __dict__

Using py3, I have an object that uses the @property decorator
class O(object):
def __init__(self):
self._a = None
@property
def a(self):
return 1
Accessing the attribute a via __dict__ (as _a) doesn't return the property-decorated value but the initialized value None:
o = O()
print(o.a, o.__dict__['_a'])
1 None
Is there a generic way to make this work? I mostly need this for
def __str__(self):
return ' '.join('{}: {}'.format(key, val) for key, val in self.__dict__.items())
Of course self.__dict__["_a"] will return self._a (well, actually it's the other way round - self._a will return self.__dict__["_a"] - but anyway), not self.a. The only thing the property is doing here is automatically invoking its getter (your a(self) function) so you don't have to type the parens; otherwise it's just a plain method call.
If you want something that works with properties too, you'll have to get those manually from dir(self.__class__) and getattr(self.__class__, name), ie:
def __str__(self):
# py2
attribs = self.__dict__.items()
# py3
# attribs = list(self.__dict__.items())
for name in dir(self.__class__):
obj = getattr(self.__class__, name)
if isinstance(obj, property):
val = obj.__get__(self, self.__class__)
attribs.append((name, val))
return ' '.join('{}: {}'.format(key, val) for key, val in attribs)
Note that this won't prevent _a from appearing in attribs - if you want to avoid that you'll also have to filter protected names out of the attribs list (all protected names, since you ask for something generic):
def __str__(self):
attribs = [(k, v) for k, v in self.__dict__.items() if not k.startswith("_")]
for name in dir(self.__class__):
# a protected property is somewhat uncommon but
# let's stay consistent with plain attribs
if name.startswith("_"):
continue
obj = getattr(self.__class__, name)
if isinstance(obj, property):
val = obj.__get__(self, self.__class__)
attribs.append((name, val))
return ' '.join('{}: {}'.format(key, val) for key, val in attribs)
Also note that this won't handle other computed attributes (property is just one generic implementation of the descriptor protocol). At this point, your best bet for something that's still as generic as possible but that can be customised if needed is to implement the above as a mixin class with a couple hooks for specialization:
class PropStrMixin(object):
# add other descriptor types you want to include in the
# attribs list
_COMPUTED_ATTRIBUTES_CLASSES = [property,]
def _get_attr_list(self):
attribs = [(k, v) for k, v in self.__dict__.items() if not k.startswith("_")]
for name in dir(self.__class__):
# a protected property is somewhat uncommon but
# let's stay consistent with plain attribs
if name.startswith("_"):
continue
obj = getattr(self.__class__, name)
if isinstance(obj, tuple(self._COMPUTED_ATTRIBUTES_CLASSES)):
val = obj.__get__(self, self.__class__)
attribs.append((name, val))
return attribs
def __str__(self):
attribs = self._get_attr_list()
return ' '.join('{}: {}'.format(key, val) for key, val in attribs)
class YourClass(SomeParent, PropStrMixin):
# here you can add to _COMPUTED_ATTRIBUTES_CLASSES
_COMPUTED_ATTRIBUTES_CLASSES = PropStrMixin._COMPUTED_ATTRIBUTES_CLASSES + [SomeCustomDescriptor]
Property is basically a "computed attribute". In general, the property's value is not stored anywhere, it is computed on demand. That's why you cannot find it in the __dict__.
The @property decorator replaces the class method with a descriptor object, which then calls the original method as its getter. This happens at the class level.
The lookup for o.a starts at the instance. It does not exist there, the class is checked in the next step. O.a exists and is a descriptor (because it has special methods for the descriptor protocol), so the descriptor's getter is called and the returned value is used.
(EDITED)
There is no general way to dump the name:value pairs for the descriptors. Classes, including the bases, must be inspected; this part is not difficult. However, retrieving the values is equivalent to a function call and may have unexpected and undesirable side effects. For a different perspective I'd like to quote a comment by bruno desthuilliers here: "property get should not have unwanted side effects (if it does then there's an obvious design error)".
You can also update self._a in the getter, since the getter's return value should always reflect what is stored in self._a:
class O(object):
def __init__(self):
self._a = self.a
@property
def a(self):
self._a = 1
return self._a
A bit redundant, maybe, but setting self._a = None initially is useless in this case.
In case you need a setter
This would also be compatible, provided you remove the self._a assignment inside the getter:
@a.setter
def a(self, value):
self._a = value

Ruby like DSL in Python

I'm currently writing my first bigger project in Python, and I'm now wondering how to define a class method so that you can execute it in the class body of a subclass of the class.
First, to give some more context, here is a slimmed-down example (I removed everything non-essential for this question) of how I'd do the thing I'm trying to do in Ruby:
If I define a class Item like this:
class Item
def initialize(data={})
@data = data
end
def self.define_field(name)
define_method("#{name}"){ instance_variable_get("#data")[name.to_s] }
define_method("#{name}=") do |value|
instance_variable_get("#data")[name.to_s] = value
end
end
end
I can use it like this:
class MyItem < Item
define_field("name")
end
item = MyItem.new
item.name = "World"
puts "Hello #{item.name}!"
Now so far I tried achieving something similar in Python, but I'm not happy with the result I've got so far:
class ItemField(object):
def __init__(self, name):
self.name = name
def __get__(self, item, owner=None):
return item.values[self.name]
def __set__(self, item, value):
item.values[self.name] = value
def __delete__(self, item):
del item.values[self.name]
class Item(object):
def __init__(self, data=None):
if data == None: data = {}
self.values = data
for field in type(self).fields:
self.values[field.name] = None
setattr(self, field.name, field)
@classmethod
def define_field(cls, name):
if not hasattr(cls, "fields"): cls.fields = []
cls.fields.append(ItemField(name))
Now I don't know how I can call define_field from within a subclass's body. This is what I wish were possible:
class MyItem(Item):
define_field("name")
item = MyItem({"name": "World"})
puts "Hello {}!".format(item.name)
item.name = "reader"
puts "Hello {}!".format(item.name)
There's this similar question, but none of the answers are really satisfying. Somebody recommends calling the function with __func__(), but I guess I can't do that, because I can't get a reference to the class from within its anonymous body (please correct me if I'm wrong about this).
Somebody else pointed out that it's better to use a module-level function for this, which I also think would be the easiest way; however, my main intention is to keep the implementation of subclasses clean, and having to import that module function wouldn't be so nice either. (Also, I'd have to do the function call outside the class body, which I think is messy.)
So basically I think my approach is wrong, because Python wasn't designed to allow this kind of thing to be done. What would be the best way to achieve something as in the Ruby example with Python?
(If there's no better way I've already thought about just having a method in the subclass which returns an array of the parameters for the define_field method.)
Perhaps calling a class method isn't the right route here. I'm not quite up to speed on exactly how and when Python creates classes, but my guess is that the class object doesn't yet exist when you'd call the class method to create an attribute.
It looks like you want to create something like a record. First, note that Python allows you to add attributes to your user-created classes after creation:
class Foo(object):
pass
>>> foo = Foo()
>>> foo.x = 42
>>> foo.x
42
Maybe you want to constrain which attributes the user can set. Here's one way.
class Item(object):
def __init__(self):
if type(self) is Item:
raise NotImplementedError("Item must be subclassed.")
def __setattr__(self, name, value):
if name not in self.fields:
raise AttributeError("Invalid attribute name.")
else:
self.__dict__[name] = value
class MyItem(Item):
fields = ("foo", "bar", "baz")
So that:
>>> m = MyItem()
>>> m.foo = 42 # works
>>> m.bar = "hello" # works
>>> m.test = 12 # raises AttributeError
Lastly, the above allows the user to subclass Item without defining fields, like so:
class MyItem(Item):
pass
This will result in a cryptic attribute error saying that the attribute fields could not be found. You can require that the fields attribute be defined at the time of class creation by using metaclasses. Furthermore, you can abstract away the need for the user to specify the metaclass by inheriting from a superclass that you've written to use the metaclass:
class ItemMetaclass(type):
def __new__(cls, clsname, bases, dct):
if "fields" not in dct:
raise TypeError("Subclass must define 'fields'.")
return type.__new__(cls, clsname, bases, dct)
class Item(object):
__metaclass__ = ItemMetaclass
fields = None
def __init__(self):
if type(self) == Item:
raise NotImplementedError("Must subclass Type.")
def __setattr__(self, name, value):
if name in self.fields:
self.__dict__[name] = value
else:
raise AttributeError("The item has no such attribute.")
class MyItem(Item):
fields = ("one", "two", "three")
You're almost there! If I understand you correctly:
class Item(object):
def __init__(self, data=None):
data = data or {}
for field, value in data.items():
if hasattr(self, field):
setattr(self, field, value)
@classmethod
def define_field(cls, name):
setattr(cls, name, None)
EDIT: As far as I know, it's not possible to access the class being defined while defining it. You can, however, call the method from the __init__ method:
class Something(Item):
def __init__(self):
type(self).define_field("name")
But then you're just reinventing the wheel.
When defining a class, you cannot reference the class itself inside its own definition block. So you have to call define_field(...) on MyItem after its definition. E.g.,
class MyItem(Item):
pass
MyItem.define_field("name")
item = MyItem({"name": "World"})
print("Hello {}!".format(item.name))
item.name = "reader"
print("Hello {}!".format(item.name))

python __getattribute__ override and #property decorator

I had to write a class of some sort that overrides __getattribute__.
Basically, my class is a container that saves every user-added property to self._meta, which is a dictionary.
class Container(object):
def __init__(self, **kwargs):
super(Container, self).__setattr__('_meta', OrderedDict())
#self._meta = OrderedDict()
super(Container, self).__setattr__('_hasattr', lambda key : key in self._meta)
for attr, value in kwargs.iteritems():
self._meta[attr] = value
def __getattribute__(self, key):
try:
return super(Container, self).__getattribute__(key)
except:
if key in self._meta : return self._meta[key]
else:
raise AttributeError, key
def __setattr__(self, key, value):
self._meta[key] = value
#usage:
>>> a = Container()
>>> a
<__main__.Container object at 0x0000000002B2DA58>
>>> a.abc = 1 #set an attribute
>>> a._meta
OrderedDict([('abc', 1)]) #attribute is in ._meta dictionary
I have some classes which inherit the Container base class, and some of their methods have the @property decorator.
class Response(Container):
@property
def rawtext(self):
if self._hasattr("value") and self.value is not None:
_raw = self.__repr__()
_raw += "|%s" %(self.value.encode("utf-8"))
return _raw
The problem is that .rawtext isn't accessible (I get an AttributeError). Every key in ._meta is accessible, and every attribute added via __setattr__ of the object base class is accessible, but methods turned into properties by the @property decorator aren't. I think it has to do with the way I override __getattribute__ in the Container base class. What should I do to make properties from @property accessible?
I think you should probably look at __getattr__ instead of __getattribute__ here. The difference is this: __getattribute__ is called unconditionally if it exists -- __getattr__ is only called if Python can't find the attribute via other means.
I completely agree with mgilson. If you want a sample code which should be equivalent to your code but work well with properties you can try:
class Container(object):
def __init__(self, **kwargs):
self._meta = OrderedDict()
#self._hasattr = lambda key: key in self._meta #???
for attr, value in kwargs.iteritems():
self._meta[attr] = value
def __getattr__(self, key):
try:
return self._meta[key]
except KeyError:
raise AttributeError(key)
def __setattr__(self, key, value):
if key in ('_meta', '_hasattr'):
super(Container, self).__setattr__(key, value)
else:
self._meta[key] = value
I really do not understand your _hasattr attribute. You put it as an attribute but it's actually a function that has access to self... shouldn't it be a method?
Actually I think you should simply use the built-in function hasattr:
class Response(Container):
@property
def rawtext(self):
if hasattr(self, 'value') and self.value is not None:
_raw = self.__repr__()
_raw += "|%s" %(self.value.encode("utf-8"))
return _raw
Note that hasattr(container, attr) will return True also for _meta.
Another thing that puzzles me is why you use an OrderedDict. You iterate over kwargs, which has arbitrary order since it's a normal dict, and add the items to the OrderedDict, so you end up with _meta containing the values in arbitrary order anyway.
If you aren't sure whether you need a specific order, simply use dict and switch to OrderedDict later if needed.
By the way: never ever use a bare try: ... except: without specifying the exception to catch. In your code you actually want to catch only AttributeError, so you should have done:
try:
return super(Container, self).__getattribute__(key)
except AttributeError:
#stuff

Subclassing ctypes - Python

This is some code I found on the internet. I'm not sure how it is meant to be used. I simply filled _members_ with the enum keys/values and it works, but I'm curious what this metaclass is all about. I am assuming it has something to do with ctypes, but I can't find much information on subclassing ctypes. I know EnumerationType isn't doing anything the way I'm using Enumeration.
from ctypes import *
class EnumerationType(type(c_uint)):
def __new__(metacls, name, bases, dict):
if not "_members_" in dict:
_members_ = {}
for key,value in dict.items():
if not key.startswith("_"):
_members_[key] = value
dict["_members_"] = _members_
cls = type(c_uint).__new__(metacls, name, bases, dict)
for key,value in cls._members_.items():
globals()[key] = value
return cls
def __contains__(self, value):
return value in self._members_.values()
def __repr__(self):
return "<Enumeration %s>" % self.__name__
class Enumeration(c_uint):
__metaclass__ = EnumerationType
_members_ = {}
def __init__(self, value):
for k,v in self._members_.items():
if v == value:
self.name = k
break
else:
raise ValueError("No enumeration member with value %r" % value)
c_uint.__init__(self, value)
@classmethod
def from_param(cls, param):
if isinstance(param, Enumeration):
if param.__class__ != cls:
raise ValueError("Cannot mix enumeration members")
else:
return param
else:
return cls(param)
def __repr__(self):
return "<member %s=%d of %r>" % (self.name, self.value, self.__class__)
And an enumeration probably done the wrong way.
class TOKEN(Enumeration):
_members_ = {'T_UNDEF':0, 'T_NAME':1, 'T_NUMBER':2, 'T_STRING':3, 'T_OPERATOR':4, 'T_VARIABLE':5, 'T_FUNCTION':6}
A metaclass is a class used to create classes. Think of it this way: all objects have a class, a class is also an object, therefore, it makes sense that a class can have a class.
http://www.ibm.com/developerworks/linux/library/l-pymeta.html
To understand what this is doing, you can look at a few points in the code.
_members_ = {'T_UNDEF':0, 'T_NAME':1, 'T_NUMBER':2, 'T_STRING':3, 'T_OPERATOR':4, 'T_VARIABLE':5, 'T_FUNCTION':6}
globals()[key] = value
Here it takes every key defined in your dictionary ("T_UNDEF", "T_NUMBER", ...) and makes them available in your globals dictionary.
def __init__(self, value):
for k,v in self._members_.items():
if v == value:
self.name = k
break
Whenever you make an instance of your enum, it checks whether the given value is one of the allowable member values defined when the class was created. When a match is found, the corresponding string name is stored in self.name.
c_uint.__init__(self, value)
This is the actual line which sets the "ctypes value" to an actual c unsigned integer.
That is indeed a weird class.
The way you are using it is correct, although another way would be:
class TOKEN(Enumeration):
T_UNDEF = 0
T_NAME = 1
T_NUMBER = 2
T_STRING = 3
T_OPERATOR = 4
T_VARIABLE = 5
T_FUNCTION = 6
(That's what the first 6 lines in __new__ are for)
Then you can use it like so:
>>> TOKEN
<Enumeration TOKEN>
>>> TOKEN(T_NAME)
<member T_NAME=1 of <Enumeration TOKEN>>
>>> T_NAME in TOKEN
True
>>> TOKEN(1).name
'T_NAME'
The from_param method is the hook ctypes calls when an object is passed to a foreign function whose argtypes include this class; here it conveniently accepts either a plain int or an Enumeration object.
I think this class is meant to be used when working with external APIs that use C-style enums, but it looks like a whole lot of work for very little gain.
