Python: Inheriting from Built-In Types

I have a question concerning subtypes of built-in types and their constructors. I want a class to inherit both from tuple and from a custom class.
Let me give you the concrete example. I work a lot with graphs, meaning nodes connected with edges. I am starting to do some work on my own graph framework.
There is a class Edge, which has its own attributes and methods. It should also inherit from a class GraphElement. (A GraphElement is every object that has no meaning outside the context of a specific graph.) But at the most basic level, an edge is just a tuple containing two nodes. It would be nice syntactic sugar if you could do the following:
edge = graph.create_edge("Spam","Eggs")
(u, v) = edge
So (u,v) would contain "Spam" and "Eggs". It would also support iteration like
for node in edge: ...
I hope you see why I would want to subtype tuple (or other basic types like set).
So here is my Edge class and its init:
class Edge(GraphElement, tuple):
    def __init__(self, graph, (source, target)):
        GraphElement.__init__(self, graph)
        tuple.__init__((source, target))
When I call
Edge(aGraph, (source, target))
I get a TypeError: tuple() takes at most 1 argument (2 given). What am I doing wrong?

Since tuples are immutable, you need to override the __new__ method as well. See http://www.python.org/download/releases/2.2.3/descrintro/#__new__
class GraphElement:
    def __init__(self, graph):
        pass

class Edge(GraphElement, tuple):
    def __new__(cls, graph, (source, target)):
        return tuple.__new__(cls, (source, target))

    def __init__(self, graph, (source, target)):
        GraphElement.__init__(self, graph)
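Note that the answer above uses Python 2 tuple parameter unpacking in `def`, which was removed in Python 3 (PEP 3113). A minimal Python 3 sketch of the same idea (the `graph`-storing body of `GraphElement` is an assumption, the original just uses `pass`):

```python
class GraphElement:
    def __init__(self, graph):
        self.graph = graph

class Edge(GraphElement, tuple):
    def __new__(cls, graph, pair):
        # The tuple's contents must be supplied in __new__, because the
        # tuple is already immutable by the time __init__ runs.
        return tuple.__new__(cls, pair)

    def __init__(self, graph, pair):
        GraphElement.__init__(self, graph)

edge = Edge(None, ("Spam", "Eggs"))
u, v = edge
print(u, v)  # Spam Eggs
```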

For what you need, I would avoid multiple inheritance and would implement an iterator using a generator:
class GraphElement:
    def __init__(self, graph):
        pass

class Edge(GraphElement):
    def __init__(self, graph, (source, target)):
        GraphElement.__init__(self, graph)
        self.source = source
        self.target = target

    def __iter__(self):
        yield self.source
        yield self.target
In this case both usages work just fine:
e = Edge(None, ("Spam", "Eggs"))
(s, t) = e
print s, t
for p in e:
    print p

You need to override __new__ -- currently tuple.__new__ is getting called (as you don't override it) with all the arguments you're passing to Edge.

Related

Python: overlapping (non exclusive) inheritance to have methods available based on instance parameters

I want to have certain attributes & methods only available in a class instance, if the parameters meet certain conditions. The different cases are not exclusive. I already have a working solution (incl. the suggestion from ShadowRanger):
class polygon():
    def __new__(cls, vert, axis=None):
        triangle = vert == 3
        solidrev = axis is not None
        if triangle and not solidrev:
            return super().__new__(_triangle)
        elif solidrev and not triangle:
            return super().__new__(_solid)
        elif solidrev and triangle:
            return super().__new__(_triangle_solid)
        else:
            return super().__new__(cls)

    def __init__(self, vert, axis=None):
        self.polygon_attribute = 1

    def polygon_method(self):
        print('polygon')

class _triangle(polygon):
    def __init__(self, vert, axis=None):
        super().__init__(vert, axis)
        self.triangle_attribute = 2

    def triangle_method(self):
        print('triangle')

class _solid(polygon):
    def __init__(self, vert, axis):
        super().__init__(vert, axis)
        self.solid_attribute = 3

    def solid_method(self):
        print('solid of revolution')

class _triangle_solid(_triangle, _solid):
    def __init__(self, vert, axis):
        super().__init__(vert, axis)
Availability of attributes & methods depends on the instance parameters:
The attributes & methods from the base class should always be available.
If the first parameter equals 3, the attributes & methods from subclass _triangle should be available.
If the second parameter is defined, the attributes & methods from subclass _solid should be available.
All combinations:
P = polygon(2)
P = polygon(2,axis=0)
P = polygon(3)
P = polygon(3,axis=0)
Is there a more elegant way to do this? In the ideal case, I want to get rid of the _triangle_solid class. Also, I don't get why I need to define the default argument for axis in some cases but not all of them.
Full project: https://github.com/gerritnowald/polygon
This is an example of trying to overdo use of inheritance. Inheritance makes logical sense when there is an "is a" relationship between the child and its parent class. A triangle is a polygon, so no problems there; it's a reasonable inheritance chain. A solid of revolution, while possibly built from a polygon, is not a polygon, and trying to wedge that into the inheritance hierarchy is creating problems. It's even worse because a solid of revolution may not even be defined in terms of a polygon at all.
I'd strongly recommend defining your solids of revolution with an attribute representing whatever is being revolved to produce it, not as a subclass of that revolved figure.
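A minimal sketch of that composition approach (class and attribute names here are illustrative, not from the question's project): a solid of revolution *has a* profile figure that is revolved, rather than *being* one.

```python
class Polygon:
    def __init__(self, vert):
        self.vert = vert

class SolidOfRevolution:
    """Composition instead of inheritance: the solid holds the figure
    being revolved as an attribute."""
    def __init__(self, profile, axis):
        self.profile = profile  # the revolved figure, e.g. a Polygon
        self.axis = axis

s = SolidOfRevolution(Polygon(3), axis=0)
print(s.profile.vert)  # 3
```

This also sidesteps the `_triangle_solid` combination class entirely: any figure, polygon or not, can serve as the profile.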
All that said, polygon itself should not be responsible for knowing all of its subclasses, and if it is, it should still be the parent of a triangle. Your design as currently rendered has a polygon class that nothing is an instance of; the __new__ is returning something that is not a polygon, and that's confusing as heck. You can write the hierarchy in a safer, if still not idiomatic OO way, by doing:
# Tweaked name; it's not just the base anymore; using PEP8 class name capitalization rules
class Polygon:
    def __new__(cls, vert, *args, **kwargs):
        # Accept and ignore the arguments we don't care about;
        # __init__ will handle them
        if vert == 3:
            return super().__new__(Triangle)
        else:
            return super().__new__(cls)

    def __init__(self, vert, axis):
        self.polygon_attribute = 1

    def polygon_method(self):
        print('polygon')

class Triangle(Polygon):
    def __init__(self, vert, axis):
        super().__init__(vert, axis)
        self.triangle_attribute = 2

    def triangle_method(self):
        print('triangle')

t = Polygon(3, None)
p = Polygon(4, None)
print(type(t), type(p))
# Indicates t is a Triangle, p is a Polygon

Python classes, mappings, pprint, KeysView vs. dict_keys; to keys() or not to keys()?

I have a problem with my base class. I started writing it after finding an answer on this site about more informative __repr__() methods. I added to it after finding a different answer on this site about using pprint() with my own classes. I tinkered with it a little more after finding a third answer on this site about making my classes unpackable with a ** operator.
I modified it again after seeing in yet another answer on this site that there was a distinction between merely giving it __getitem__(), __iter__(), and __len__() methods on the one hand, and actually making it a fully-qualified mapping by subclassing collections.abc.Mapping on the other. Further, I saw that doing so would remove the need for writing my own keys() method, as the Mapping would take care of that.
So I got rid of keys(), and a class method broke.
The problem
I have a method that iterates through my class' keys and values to produce one big string formatted as I'd like it. That class looks like this.
class MyObj():
    def __init__(self, foo, bar):
        self.foo = foo
        self.bar = bar

    def the_problem_method(self):
        """Method I'm getting divergent output for."""
        longest = len(max((key for key in self.keys()), key=len))
        key_width = longest + TAB_WIDTH - longest % TAB_WIDTH
        return '\n'.join((f'{key:<{key_width}}{value}' for key, value in self))
Yes, that doesn't have the base class in it, but the MWE later on will account for that. The nut of it is that (key for key in self.keys()) part. When I have a keys() method written, I get the output I want.
def keys(self):
    """Get object attribute names."""
    return self.__dict__.keys()
When I remove that to go with the keys() method supplied by collections.abc.Mapping, I get no space between key and value.
The question
I can get the output I want by restoring the keys() method (and maybe adding values() and items() while I'm at it), but is that the best approach? Would it be better to go with the Mapping one and modify my class method to suit it? If so, how? Should I leave Mapping well enough alone until I know I need it?
This is my base class, to be copied all over creation and subclassed out the wazoo. I want to Get. It. Right.
There are already several considerations I can think of and many more of which I am wholly ignorant.
I use Python 3.9 and greater. I'll abandon 3.9 when conda does.
I want to keep my more-informative __repr__() methods.
I want pprint() to work, via the _dispatch table method with _format_dict_items().
I want to allow for duck typing my classes reliably.
I have not yet used type hinting, but I want to allow for using best practices there if I start.
Everything else I know nothing about.
The MWE
This has my problem class at the top and output stuff at the bottom. There are two series of classes building upon the previous ones.
The first are ever-more-inclusive base classes, and it is here that the difference between the instance with the keys() method and that without is shown. The first class, BaseMap, subclasses Mapping and has the __getitem__(), __iter__(), and __len__() methods. The next class up the chain, BaseMapKeys, subclasses that and adds the keys() method.
The second group, MapObj and MapKeysObj, are subclasses of the problem class that also subclass those different base classes respectively.
OK, maybe the WE isn't so M, but lots of things got me to this point and I don't want to neglect any.
import collections.abc
from pprint import pprint, PrettyPrinter

TAB_WIDTH = 3

class MyObj():
    def __init__(self, foo, bar):
        self.foo = foo
        self.bar = bar

    def the_problem_method(self):
        """Method I'm getting divergent output for."""
        longest = len(max((key for key in self.keys()), key=len))
        key_width = longest + TAB_WIDTH - longest % TAB_WIDTH
        return '\n'.join((f'{key:<{key_width}}{value}' for key, value in self))

class Base(object):
    """Base class with more informative __repr__."""
    def __repr__(self):
        """Object representation."""
        params = (f'{key}={repr(value)}'
                  for key, value in self.__dict__.items())
        return f'{repr(self.__class__)}({", ".join(params)})'

class BaseMap(Base, collections.abc.Mapping):
    """Enable class to be pprint-able, unpacked with **."""
    def __getitem__(self, attr):
        """Get object attribute values."""
        return getattr(self.__dict__, attr)

    def __iter__(self):
        """Make object iterable."""
        for attr in self.__dict__.keys():
            yield attr, getattr(self, attr)

    def __len__(self):
        """Get length of object."""
        return len(self.__dict__)

class BaseMapKeys(BaseMap):
    """Overwrite KeysView output with what I thought it would be."""
    def keys(self):
        """Get object attribute names."""
        return self.__dict__.keys()

class MapObj(BaseMap, MyObj):
    """Problem class with collections.abc.Mapping."""
    def __init__(self, foo, bar):
        super().__init__(foo, bar)

class MapKeysObj(BaseMapKeys, MyObj):
    """Problem class with collections.abc.Mapping and keys method."""
    def __init__(self, foo, bar):
        super().__init__(foo, bar)

if isinstance(getattr(PrettyPrinter, '_dispatch'), dict):
    # assume the dispatch table method still works
    def pprint_basemap(printer, object, stream, indent, allowance, context,
                       level):
        """Implement pprint for subclasses of BaseMap class."""
        write = stream.write
        write(f'{object.__class__}(\n {indent * " "}')
        printer._format_dict_items(object, stream, indent, allowance + 1,
                                   context, level)
        write(f'\n{indent * " "})')

    map_classes = [MapObj, MapKeysObj]
    for map_class in map_classes:
        PrettyPrinter._dispatch[map_class.__repr__] = pprint_basemap

def print_stuff(map_obj):
    print('pprint object:')
    pprint(map_obj)
    print()
    print('print keys():')
    print(map_obj.keys())
    print()
    print('print list(keys()):')
    print(list(map_obj.keys()))
    print()
    print('print the problem method:')
    print(map_obj.the_problem_method())
    print('\n\n')

params = ['This is a really long line to force new line in pprint output', 2]
baz = MapObj(*params)
print_stuff(baz)
scoggs = MapKeysObj(*params)
print_stuff(scoggs)
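One detail worth knowing when weighing the options: the keys() that collections.abc.Mapping supplies is a KeysView built on top of __iter__, and it assumes __iter__ yields keys only. An __iter__ like BaseMap's, which yields (attribute, value) pairs, therefore makes the inherited keys() produce tuples rather than attribute names. A stripped-down demonstration (class name is illustrative):

```python
import collections.abc

class PairMap(collections.abc.Mapping):
    """A Mapping whose __iter__ (incorrectly) yields (key, value) pairs."""
    def __init__(self, **kwargs):
        self._d = dict(kwargs)

    def __getitem__(self, key):
        return self._d[key]

    def __iter__(self):
        # Mapping's derived keys()/items()/values() assume this yields
        # keys; yielding pairs makes the keys() view produce tuples.
        yield from self._d.items()

    def __len__(self):
        return len(self._d)

print(list(PairMap(foo=1, bar=2).keys()))  # [('foo', 1), ('bar', 2)]
```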

Implementing Graph for Bayes Net in FSharp

I'm trying to translate a graph formulation from Python to F#.
The Python "Node" class:
class Node:
    """A Node is the basic element of a graph. In its most basic form a
    graph is just a list of nodes. A Node is really just a list of
    neighbors.
    """
    def __init__(self, id, index=-1, name="anonymous"):
        # This defines a list of edges to other nodes in the graph.
        self.neighbors = set()
        self.visited = False
        self.id = id
        # The index of this node within the list of nodes in the overall graph.
        self.index = index
        # Optional name, most useful for debugging purposes.
        self.name = name

    def __lt__(self, other):
        # Defines a < operator for this class, which allows for easily
        # sorting a list of nodes.
        return self.index < other.index

    def __hash__(self):
        return hash(self.id)

    def __eq__(self, right):
        return self.id == right.id

    def add_neighbor(self, node):
        """Make node a neighbor if it is not already. This is a hack; we
        should be allowing self to be a neighbor of self in some graphs.
        This should be enforced at the level of a graph, because that is
        where the type of the graph would disallow it.
        """
        if (not node in self.neighbors) and (not self == node):
            self.neighbors.add(node)

    def remove_neighbor(self, node):
        # Remove the node from the list of neighbors, effectively deleting
        # that edge from the graph.
        self.neighbors.remove(node)

    def is_neighbor(self, node):
        # Check if node is a member of neighbors.
        return node in self.neighbors
My F# class so far:
type Node<'T> = string * 'T
type Edge<'T,'G> = Node<'T> * Node<'T> * 'G

type Graph =
    | Undirected of seq(Node * list Edge)
    | Directed of seq(Node * list Edge * list Edge)
Yes, this does have to do with immutability. F#'s Set is an immutable collection; it is based on a binary tree, which supports Add, Remove and lookup in O(log n) time.
However, because the collection is immutable, the add operation returns a new Set.
let originalSet = set [1; 2; 7]
let newSet = originalSet.Add(5)
The most purely functional solution is probably to reconstruct your problem to remove the mutability entirely. This approach would probably see you reconstruct your node class as an immutable data container (with no methods) and define the functions that act on that data container in a separate module.
module Nodes =
    /// Creates a new node from an old node with a supplied neighbour node added.
    let addNeighbour neighbourNode node =
        Node <| Set.add neighbourNode (node.Neighbours)
    // Note: you'll need to replace the backwards pipe with brackets for pre-F# 4.0
See the immutable collections in the FSharp Core library such as List, Map, etc. for more examples.
If you prefer the mutable approach, you could just make your neighbours mutable so that it can be updated when the map changes or just use a mutable collection such as a System.Collections.Generic.HashSet<'T>.
When it comes to the hashcode, Set<'T> actually doesn't make use of that. It requires that objects that can be contained within it implement the IComparable interface. This is used to generate the ordering required for the binary tree. It looks like your object already has a concept of ordering built-in which would be appropriate to provide this behaviour.

How to implement a list of references in python?

I'm trying to model a collection of objects in python (2). The collection should make a certain attribute (an integer, float or any immutable object) of the objects available via a list interface.
(1)
>>> print (collection.attrs)
[1, 5, 3]
>>> collection.attrs = [4, 2, 3]
>>> print (object0.attr == 4)
True
I especially expect this list interface in the collection to allow for reassigning a single object's attribute, e.g.
(2)
>>> collection.attrs[2] = 8
>>> print (object2.attr == 8)
True
I am sure this is a quite frequently occurring situation; unfortunately, I was not able to find a satisfying answer on how to implement it on Stack Overflow / Google etc.
Behind the scenes, I expect the object.attr to be implemented as a mutable object. Somehow I also expect the collection to hold a "list of references" to the object.attr and not the respectively referenced (immutable) values themselves.
I ask for your suggestion how to solve this in an elegant and flexible way.
A possible implementation that allows for (1) but not for (2) is
class Component(object):
    """One of many components."""
    def __init__(self, attr):
        self.attr = attr

class System(object):
    """One System object contains and manages many Component instances.

    System is the main interface to adjusting the components.
    """
    def __init__(self, attr_list):
        self._components = []
        for attr in attr_list:
            new = Component(attr)
            self._components.append(new)

    @property
    def attrs(self):
        # !!! this breaks (2):
        return [component.attr for component in self._components]

    @attrs.setter
    def attrs(self, new_attrs):
        for component, new_attr in zip(self._components, new_attrs):
            component.attr = new_attr
The !!! line breaks (2) because we create a new list whose entries are references to the values of all Component.attr and not references to the attributes themselves.
Thanks for your input.
TheXMA
Just add another proxy in between:
class _ListProxy:
    def __init__(self, system):
        self._system = system

    def __getitem__(self, index):
        return self._system._components[index].attr

    def __setitem__(self, index, value):
        self._system._components[index].attr = value

class System:
    ...

    @property
    def attrs(self):
        return _ListProxy(self)
You can make the proxy fancier by implementing all the other list methods, but this is enough for your use-case.
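To see the proxy in action end to end, here is a self-contained sketch combining it with minimal Component and System classes as defined in the question (each access to attrs returns a fresh proxy bound to the live component list, which is why single-item assignment reaches the underlying objects):

```python
class Component(object):
    def __init__(self, attr):
        self.attr = attr

class _ListProxy(object):
    def __init__(self, system):
        self._system = system

    def __getitem__(self, index):
        return self._system._components[index].attr

    def __setitem__(self, index, value):
        self._system._components[index].attr = value

class System(object):
    def __init__(self, attr_list):
        self._components = [Component(a) for a in attr_list]

    @property
    def attrs(self):
        return _ListProxy(self)

system = System([1, 5, 3])
system.attrs[2] = 8                # requirement (2): item assignment...
print(system._components[2].attr)  # ...reaches the underlying object: 8
```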
@filmor thanks a lot for your answer; this solves the problem perfectly! I made it a bit more general:
class _ListProxy(object):
    """Is a list of object attributes. Accessing _ListProxy entries
    evaluates the object attributes each time it is accessed,
    i.e. this list "proxies" the object attributes.
    """
    def __init__(self, list_of_objects, attr_name):
        """Provide a list of object instances and a name of a commonly
        shared attribute that should be proxied by this _ListProxy
        instance.
        """
        self._list_of_objects = list_of_objects
        self._attr_name = attr_name

    def __getitem__(self, index):
        return getattr(self._list_of_objects[index], self._attr_name)

    def __setitem__(self, index, value):
        setattr(self._list_of_objects[index], self._attr_name, value)

    def __repr__(self):
        return repr(list(self))

    def __len__(self):
        return len(self._list_of_objects)
Are there any important list methods missing?
And what if I want some of the components (objects) to be garbage collected?
Do I need to use something like a WeakList to prevent memory leakage?

Choosing variables to create when initializing classes

I have a class which would be a container for a number of variables of different types. The collection is finite and not very large so I didn't use a dictionary. Is there a way to automate, or shorten the creation of variables based on whether or not they are requested (specified as True/False) in the constructor?
Here is what I have for example:
class test:
    def __init__(self, a=False, b=False, c=False):
        if a: self.a = {}
        if b: self.b = 34
        if c: self.c = "generic string"
For any of a,b,c that are true in the constructor they will be created in the object.
I have a collection of standard variables (a,b,c,d..) that some objects will have and some objects won't. The number of combinations is too large to create separate classes, but the number of variables isn't enough to have a dictionary for them in each class.
Is there any way in python to do something like this:
class test:
    def __init__(self, *args):
        default_values = {a: {}, b: 34, c: "generic string"}
        for item in args:
            if item: self.arg = default_values[arg]
Maybe there is a whole other way to do this?
EDIT:
To clarify: this is a class which represents different types of bounding boxes on a 2D surface. Depending on the function of the box, it can have any of frame coordinates, internal cross coordinates, an id, population statistics (attached to that box), and some other cached values for easy calculation.
I don't want to have each object as a dictionary because there are methods attached to it which allow it to export and modify its internal data and interact with other objects of the same type (similar to how strings interact with + - .join, etc.). I also don't want to have a dictionary inside each object because the call to that variable is inelegant:
print foo.info["a"]
versus
print foo.a
Thanks to ballsdotball I've come up with a solution:
class test:
    def __init__(self, a=False, b=False, c=False):
        default_values = {"a": {}, "b": 34, "c": "generic string"}
        for k, v in default_values.iteritems():
            if eval(k): setattr(self, k, v)
Maybe something like:
def __init__(self, *args, **kwargs):
    default_values = {"a": {}, "b": 34, "c": "generic string"}
    for k, v in kwargs.iteritems():
        try:
            if v is not False:
                setattr(self, k, default_values[k])
        except Exception, e:
            print "Argument has no default value.", e
But to be honest I would just put the default values in with the init arguments instead of having to test for them like that.
*Edited a couple times for syntax.
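For the record, a sketch of how that table-of-defaults idea could look without eval (the class-level DEFAULTS table and the deepcopy are assumptions on my part, not from the question; the deepcopy matters because the {} default is mutable and would otherwise be shared across instances):

```python
import copy

class Test(object):
    DEFAULTS = {"a": {}, "b": 34, "c": "generic string"}

    def __init__(self, **flags):
        for name, wanted in flags.items():
            if wanted:
                # deepcopy so instances don't share the mutable {} default
                setattr(self, name, copy.deepcopy(self.DEFAULTS[name]))

t = Test(a=True, c=True)
print(t.a, t.c)          # {} generic string
print(hasattr(t, "b"))   # False
```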
You can subclass dict (if you aren't using positional arguments):
class Test(dict):
    def your_method(self):
        return self['foo'] * 4
You can also override __getattr__ and __setattr__ if the self['foo'] syntax bothers you:
class Test(dict):
    def __getattr__(self, key):
        return self[key]

    def __setattr__(self, key, value):
        self[key] = value

    def your_method(self):
        return self.foo * 4
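Since dict itself defines no __getattr__, the attribute-access variant has to route through item access. A self-contained sketch with usage (the KeyError-to-AttributeError translation is my addition; it keeps hasattr() working for missing keys):

```python
class Test(dict):
    """Dict subclass exposing its keys as attributes."""
    def __getattr__(self, key):
        # Only called when normal attribute lookup fails, so real
        # methods like your_method are unaffected.
        try:
            return self[key]
        except KeyError:
            raise AttributeError(key)

    def __setattr__(self, key, value):
        self[key] = value

    def your_method(self):
        return self.foo * 4

t = Test(foo=3)
t.bar = 5
print(t.your_method(), t["bar"])  # 12 5
```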
