Python: Defining a class with only Integers Defined

I am defining a class where only a set of integers is used.
I cannot use the following datatypes in defining my class: set, frozenset and dictionaries.
I need help defining:
remove(self, i): integer i is removed from the set. An exception is raised if i is not in self.
discard(self, i): integer i is removed from the set. No exception is raised if i is not in self.

Assuming you are using an internal list based on what you've said, you could do it like so:
class Example(object):
    def __init__(self):
        self._list = list()

    # all your other methods here...

    def remove(self, i):
        try:
            self._list.remove(i)
        except ValueError:
            raise ValueError("i is not in the set.")

    def discard(self, i):
        try:
            self._list.remove(i)
        except ValueError:
            pass
remove() tries to remove the element and catches the list's ValueError so it can raise its own. discard() does the same, but simply does nothing if a ValueError occurs.

I cannot use the following datatypes in defining my class: set, frozenset and dictionaries.
It looks like you are going to use a list.
You can use the list's remove method and handle the exception in an appropriate way, for example:
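A minimal sketch of that pattern (assuming the integers are kept in a plain list named items; the names are only illustrative):

items = [1, 2, 3]

# remove() semantics: let the ValueError from list.remove propagate (or re-raise your own)
items.remove(2)

# discard() semantics: swallow the ValueError so a missing value is not an error
try:
    items.remove(99)
except ValueError:
    pass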

Here's a highly inefficient but complete implementation using the MutableSet ABC:
import collections

class MySet(collections.MutableSet):
    def __init__(self, iterable=tuple()):
        self._items = []
        for value in iterable:
            self.add(value)

    def discard(self, value):
        try:
            self._items.remove(value)
        except ValueError:
            pass

    def add(self, value):
        if value not in self:
            self._items.append(value)

    def __iter__(self):
        return iter(self._items)

    def __len__(self):
        return len(self._items)

    def __contains__(self, value):
        return value in self._items
From the collections.MutableSet source, remove is implemented in terms of discard:
def remove(self, value):
    if value not in self:
        raise KeyError(value)
    self.discard(value)
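For reference, a quick usage sketch of the MySet class above (the values are only illustrative):

s = MySet([3, 1, 3, 2])
print(len(s))    # 3 -- the duplicate 3 is ignored by add()
s.discard(99)    # no exception, even though 99 is not in the set
s.remove(1)      # removes 1
s.remove(1)      # now raises KeyError(1) via the inherited MutableSet.remove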

Here is something I did to remove duplicates; take some ideas from it:
combList = list1 + list2
combList.sort()
last = combList[-1]
for i in range(len(combList)-2, -1, -1):
    if last == combList[i]:
        del combList[i]
    else:
        last = combList[i]
combList.sort()
for i in range(len(combList)):
    print i+1, combList[i]
I totally agree with LiOliQ: the only way is to do it as a list.


Enable Python Class to support for loop through an internal iterable member variable

from sortedcontainers import SortedSet

class BigSet(object):
    def __init__(self):
        self.set = SortedSet()
        self.current_idx = -1

    def __getitem__(self, index):
        try:
            return self.set[index]
        except IndexError as e:
            print('Exception: Index={0} len={1}'.format(index, len(self.set)))
            raise StopIteration

    def add(self, element):
        self.set.add(element)

    def __len__(self):
        return len(self.set)

    def __iter__(self):
        self.current_idx = -1
        return self

    def __next__(self):
        self.current_idx += 1
        if self.current_idx == len(self.set):
            raise StopIteration
        else:
            return self.set[self.current_idx]

def main():
    big = BigSet()
    big.add(1)
    big.add(2)
    big.add(3)
    for b in big:
        print(b)
    for b2 in big:
        print(b2)

if __name__ == "__main__":
    main()
I have a class that embeds an iterable member variable named self.set, and I would like to enable this class to support for loops. The above is the code I wrote for that purpose. However, I think there must be a better way to do this, since the class already has an iterable member.
Question> Is there a way that I can delegate the job to the embedded self.set? Also, I think there may be a good way to implement __getitem__ too.
Thank you
I am almost certain you do not want your container to be an iterator. So, it should not implement __next__. Instead, it should be iterable, so it only needs to implement __iter__. In that case, if you want to delegate to the iterable member:
def __iter__(self):
    return iter(self.set)
And remove your __next__ method.
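Put together, a minimal sketch of the revised class (reusing the question's BigSet name and SortedSet storage) could look like this:

from sortedcontainers import SortedSet

class BigSet(object):
    def __init__(self):
        self.set = SortedSet()

    def add(self, element):
        self.set.add(element)

    def __len__(self):
        return len(self.set)

    def __getitem__(self, index):
        return self.set[index]    # an IndexError propagates naturally

    def __iter__(self):
        # delegate iteration to the embedded SortedSet; no __next__ needed,
        # and repeated or nested for loops now work correctly
        return iter(self.set)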

python __getattribute__ override and @property decorator

I had to write a class of some sort that overrides __getattribute__.
Basically my class is a container, which saves every user-added property to self._meta, which is a dictionary.
from collections import OrderedDict

class Container(object):
    def __init__(self, **kwargs):
        super(Container, self).__setattr__('_meta', OrderedDict())
        #self._meta = OrderedDict()
        super(Container, self).__setattr__('_hasattr', lambda key: key in self._meta)
        for attr, value in kwargs.iteritems():
            self._meta[attr] = value

    def __getattribute__(self, key):
        try:
            return super(Container, self).__getattribute__(key)
        except:
            if key in self._meta:
                return self._meta[key]
            else:
                raise AttributeError, key

    def __setattr__(self, key, value):
        self._meta[key] = value

#usage:
>>> a = Container()
>>> a
<__main__.Container object at 0x0000000002B2DA58>
>>> a.abc = 1 #set an attribute
>>> a._meta
OrderedDict([('abc', 1)]) #attribute is in ._meta dictionary
I have some classes which inherit the Container base class, and some of their methods use the @property decorator.
class Response(Container):
    @property
    def rawtext(self):
        if self._hasattr("value") and self.value is not None:
            _raw = self.__repr__()
            _raw += "|%s" % (self.value.encode("utf-8"))
            return _raw
The problem is that .rawtext isn't accessible (I get an AttributeError). Every key in ._meta is accessible, and every attribute added by __setattr__ of the object base class is accessible, but properties created with the @property decorator aren't. I think it has to do with the way I override __getattribute__ in the Container base class. What should I do to make the @property properties accessible?
I think you should probably look at __getattr__ instead of __getattribute__ here. The difference is this: __getattribute__ is called unconditionally if it exists -- __getattr__ is only called if Python can't find the attribute via other means.
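A tiny illustration of that difference (the Demo class is made up purely for demonstration):

class Demo(object):
    x = 1

    def __getattr__(self, name):
        # only reached when normal attribute lookup fails
        return 'fallback for {}'.format(name)

d = Demo()
print(d.x)        # 1 -- found normally, __getattr__ never runs
print(d.missing)  # 'fallback for missing' -- normal lookup failed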
I completely agree with mgilson. If you want sample code that is equivalent to yours but works well with properties, you can try:
from collections import OrderedDict

class Container(object):
    def __init__(self, **kwargs):
        self._meta = OrderedDict()
        #self._hasattr = lambda key: key in self._meta #???
        for attr, value in kwargs.iteritems():
            self._meta[attr] = value

    def __getattr__(self, key):
        try:
            return self._meta[key]
        except KeyError:
            raise AttributeError(key)

    def __setattr__(self, key, value):
        if key in ('_meta', '_hasattr'):
            super(Container, self).__setattr__(key, value)
        else:
            self._meta[key] = value
I really do not understand your _hasattr attribute. You set it as an attribute, but it's actually a function that has access to self... shouldn't it be a method?
Actually, I think you should simply use the built-in function hasattr:
class Response(Container):
    @property
    def rawtext(self):
        if hasattr(self, 'value') and self.value is not None:
            _raw = self.__repr__()
            _raw += "|%s" % (self.value.encode("utf-8"))
            return _raw
Note that hasattr(container, attr) will also return True for _meta.
Another thing that puzzles me is why you use an OrderedDict. You iterate over kwargs, which has arbitrary order since it's a normal dict, and add the items to the OrderedDict, so _meta ends up holding the values in arbitrary order anyway.
If you aren't sure whether you need a specific order or not, simply use dict and swap in OrderedDict later if needed.
By the way: never ever use a bare try: ... except: without specifying the exception to catch. In your code you actually wanted to catch only AttributeError, so you should have done:
try:
    return super(Container, self).__getattribute__(key)
except AttributeError:
    #stuff

Subclass Python list to Validate New Items

I want a python list which represents itself externally as an average of its internal list items, but otherwise behaves as a list. It should raise a TypeError if an item is added that can't be cast to a float.
The part I'm stuck on is raising TypeError. It should be raised for invalid items added via any list method, like .append, .extend, +=, setting by slice, etc.
Is there a way to intercept new items added to the list and validate them?
I tried re-validating the whole list in __getattribute__, but when it's called I only have access to the old version of the list; plus it doesn't even get called for initialization, for operators like +=, or for assignments like mylist[0] = 5.
Any ideas?
Inherit from MutableSequence and implement the methods it requires, as well as any others that fall outside the scope of sequences alone -- like the operators here. This lets you control how items are added and assigned while the ABC fills in iteration and containment for you.
If you want to check for slices, by the way, you need to do isinstance(key, slice) in your __getitem__ (and/or __setitem__) methods. Note that a single index like myList[0] is not a slice request, whereas myList[:0] is an actual slice request; see the sketch below.
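A hedged sketch of that slice check (the class name AveragedSeqSketch, the _items attribute, and the _validate helper are all illustrative, not part of the original question):

class AveragedSeqSketch(object):
    def __init__(self, items=()):
        self._items = [self._validate(v) for v in items]

    def _validate(self, value):
        # float() raises ValueError for bad strings and TypeError for e.g. None;
        # normalize both into the TypeError the question asks for
        try:
            return float(value)
        except (TypeError, ValueError):
            raise TypeError("%r cannot be cast to float" % (value,))

    def __getitem__(self, key):
        return self._items[key]

    def __setitem__(self, key, value):
        if isinstance(key, slice):
            # slice assignment: the right-hand side is an iterable of items
            self._items[key] = [self._validate(v) for v in value]
        else:
            # a single index like myList[0] = 5 is not a slice
            self._items[key] = self._validate(value)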
The array.array class will take care of the float part:
import array
import math

class AverageList(array.array):
    def __new__(cls, *args, **kw):
        return array.array.__new__(cls, 'd')

    def __init__(self, values=()):
        self.extend(values)

    def __repr__(self):
        if not len(self):
            return 'Empty'
        return repr(math.fsum(self) / len(self))
And some tests:
>>> s = AverageList([1,2])
>>> s
1.5
>>> s.append(9)
>>> s
4.0
>>> s.extend('lol')
Traceback (most recent call last):
File "<pyshell#117>", line 1, in <module>
s.extend('lol')
TypeError: a float is required
Actually the best answer may be: don't.
Checking all objects as they get added to the list will be computationally expensive. What do you gain by doing those checks? It seems to me that you gain very little, and I'd recommend against implementing it.
Python doesn't check types, and so trying to have a little bit of type checking for one object really doesn't make a lot of sense.
There are 7 methods of the list class that add elements to the list and would have to be checked. Here's one compact implementation:
def check_float(x):
    try:
        f = float(x)
    except:
        raise TypeError("Cannot add %s to AverageList" % str(x))

def modify_method(f, which_arg=0, takes_list=False):
    def new_f(*args):
        if takes_list:
            map(check_float, args[which_arg + 1])
        else:
            check_float(args[which_arg + 1])
        return f(*args)
    return new_f

class AverageList(list):
    def __check_float(self, x):
        try:
            f = float(x)
        except:
            raise TypeError("Cannot add %s to AverageList" % str(x))

    append = modify_method(list.append)
    extend = modify_method(list.extend, takes_list=True)
    insert = modify_method(list.insert, 1)
    __add__ = modify_method(list.__add__, takes_list=True)
    __iadd__ = modify_method(list.__iadd__, takes_list=True)
    __setitem__ = modify_method(list.__setitem__, 1)
    __setslice__ = modify_method(list.__setslice__, 2, takes_list=True)
The general approach would be to create your own class inheriting from list and overriding specific methods like append, extend, etc. This will probably also include the magic methods of the Python list (see this article for details: http://www.rafekettler.com/magicmethods.html#sequence).
For validation, you will also need to override __setitem__(self, key, value), as in the sketch below.
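A minimal sketch of that list-subclass approach (the class name ValidatedList and the helper _as_float are illustrative; extend, insert, += and the other mutating methods would need the same treatment):

class ValidatedList(list):
    @staticmethod
    def _as_float(value):
        # turn anything that can't be cast to float into a TypeError
        try:
            return float(value)
        except (TypeError, ValueError):
            raise TypeError("%r can't be cast to float" % (value,))

    def append(self, value):
        super(ValidatedList, self).append(self._as_float(value))

    def __setitem__(self, key, value):
        # in Python 3, slice assignment also goes through __setitem__
        if isinstance(key, slice):
            value = [self._as_float(v) for v in value]
        else:
            value = self._as_float(value)
        super(ValidatedList, self).__setitem__(key, value)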
Here's how to create a subclass using the MutableSequence abstract base class in the collections module as its base class (not fully tested -- an exercise for the reader ;-):
import collections

class AveragedSequence(collections.MutableSequence):
    def _validate(self, x):
        try: return float(x)
        except: raise TypeError("Can't add {} to AveragedSequence".format(x))
    def average(self): return sum(self._list) / len(self._list)
    def __init__(self, arg): self._list = [self._validate(v) for v in arg]
    def __repr__(self): return 'AveragedSequence({!r})'.format(self._list)
    def __setitem__(self, i, value): self._list[i] = self._validate(value)
    def __delitem__(self, i): del self._list[i]
    def insert(self, i, value): return self._list.insert(i, self._validate(value))
    def __getitem__(self, i): return self._list[i]
    def __len__(self): return len(self._list)
    def __iter__(self): return iter(self._list)
    def __contains__(self, item): return item in self._list

if __name__ == '__main__':
    avgseq = AveragedSequence(range(10))
    print avgseq
    print avgseq.average()
    avgseq[2] = 3
    print avgseq
    print avgseq.average()
    # ..etc

Python OrderedSet with .index() method

Does anyone know about a fast OrderedSet implementation for python that:
remembers insertion order
has an index() method (like the one lists offer)
All implementations I found are missing the .index() method.
You can always add it in a subclass. Here is a basic implementation for the OrderedSet you linked in a comment:
class IndexOrderedSet(OrderedSet):
    def index(self, elem):
        if elem in self.map:
            return next(i for i, e in enumerate(self) if e == elem)
        else:
            raise KeyError("That element isn't in the set")
You mentioned you only need add, index, and in-order iteration. You can get this by using an OrderedDict as storage. As a bonus, you can subclass the collections.Set abstract class to get the other set operations frozensets support:
from itertools import count, izip
from collections import OrderedDict, Set

class IndexOrderedSet(Set):
    """An OrderedFrozenSet-like object
       Allows constant time 'index'ing
       But doesn't allow you to remove elements"""
    def __init__(self, iterable=()):
        self.num = count()
        self.dict = OrderedDict(izip(iterable, self.num))

    def add(self, elem):
        if elem not in self:
            self.dict[elem] = next(self.num)

    def index(self, elem):
        return self.dict[elem]

    def __contains__(self, elem):
        return elem in self.dict

    def __len__(self):
        return len(self.dict)

    def __iter__(self):
        return iter(self.dict)

    def __repr__(self):
        return 'IndexOrderedSet({})'.format(self.dict.keys())
You can't subclass collections.MutableSet because you can't support removing elements from the set and keep the indexes correct.

Correct usage of a getter/setter for dictionary values

I'm pretty new to Python, so if there's anything here that's flat-out bad, please point it out.
I have an object with this dictionary:
traits = {'happy': 0, 'worker': 0, 'honest': 0}
The value for each trait should be an int in the range 1-10, and new traits should not be allowed to be added. I want getter/setters so I can make sure these constraints are being kept. Here's how I made the getter and setter now:
def getTrait(self, key):
    if key not in self.traits.keys():
        raise KeyError
    return self.traits[key]

def setTrait(self, key, value):
    if key not in self.traits.keys():
        raise KeyError
    value = int(value)
    if value < 1 or value > 10:
        raise ValueError
    self.traits[key] = value
I read on this website about the property() method, but I don't see an easy way to make use of it for getting/setting the values inside the dictionary. Is there a better way to do this? Ideally I would like the usage of this object to be obj.traits['happy'] = 14, which would invoke my setter method and throw a ValueError since 14 is over 10.
If you are willing to use syntax like obj['happy'] = 14 then you could use __getitem__ and __setitem__:
def __getitem__(self, key):
    if key not in self.traits.keys():
        raise KeyError
    ...
    return self.traits[key]

def __setitem__(self, key, value):
    if key not in self.traits.keys():
        raise KeyError
    ...
    self.traits[key] = value
If you really do want obj.traits['happy'] = 14 then you could define a subclass of dict and make obj.traits an instance of this subclass.
The subclass would then override __getitem__ and __setitem__ (see below).
P.S. To subclass dict, inherit from both collections.MutableMapping and dict. Otherwise, dict.update would not call the new __setitem__.
import collections

class TraitsDict(collections.MutableMapping, dict):
    def __getitem__(self, key):
        return dict.__getitem__(self, key)

    def __setitem__(self, key, value):
        value = int(value)
        if not 1 <= value <= 10:
            raise ValueError('{v} not in range [1,10]'.format(v=value))
        dict.__setitem__(self, key, value)

    def __delitem__(self, key):
        dict.__delitem__(self, key)

    def __iter__(self):
        return dict.__iter__(self)

    def __len__(self):
        return dict.__len__(self)

    def __contains__(self, x):
        return dict.__contains__(self, x)

class Person(object):
    def __init__(self):
        self.traits = TraitsDict({'happy': 0, 'worker': 0, 'honest': 0})

p = Person()
print(p.traits['happy'])
# 0
p.traits['happy'] = 1
print(p.traits['happy'])
# 1
p.traits['happy'] = 14
# ValueError: 14 not in range [1,10]
Some obvious tips come to my mind first:
Do not use the .keys() method when checking for the existence of a key (instead of if key not in self.traits.keys(), use if key not in self.traits).
Do not explicitly raise KeyError in the getter - it is raised anyway if you try to access a nonexistent key.
Your code could look like this after the above changes:
def getTrait(self, key):
    return self.traits[key]

def setTrait(self, key, value):
    if key not in self.traits:
        raise KeyError
    value = int(value)
    if value < 1 or value > 10:
        raise ValueError
    self.traits[key] = value
P.S. I did not check the correctness of your code thoroughly - there may be some other issues.
and new traits should not be allowed to be added.
The natural way to do this is to use an object instead of a dictionary, and set the class' __slots__.
The value for each trait should be an int in the range 1-10... I want getter/setters so I can make sure these constraints are being kept.
The natural way to do this is to use an object instead of a dictionary, so that you can write getter/setter logic that's part of the class, and wrap them up as properties. Since all these properties will work the same way, we can do some refactoring to write code that generates a property given an attribute name.
The following is probably over-engineered:
def one_to_ten(attr):
    def get(obj): return getattr(obj, attr)
    def set(obj, val):
        val = int(val)
        if not 1 <= val <= 10: raise ValueError
        setattr(obj, attr, val)
    return property(get, set)

def create_traits_class(*traits):
    class Traits(object):
        __slots__ = ['_' + trait for trait in traits]
        for trait in traits: locals()[trait] = one_to_ten('_' + trait)

        def __init__(self, **kwargs):
            for k, v in kwargs.items(): setattr(self, k, v)
            for trait in traits: assert hasattr(self, trait), "Missing trait in init"

        def __repr__(self):
            return 'Traits(%s)' % ', '.join(
                '%s = %s' % (trait, getattr(self, trait)) for trait in traits
            )
    return Traits

example_type = create_traits_class('happy', 'worker', 'honest')
example_instance = example_type(happy=3, worker=8, honest=4)
# and you can set the .traits of some other object to example_instance.
