Getting the key of a dictionary from its hash in Python?

I have a graph class that uses an adjacency list to keep track of vertices and edges, as well as a vertex class with a predefined hash function that looks like the following:
class Vertex():
    def __init__(self, name):
        self.name = name

    def __hash__(self):
        return hash(self.name)
Essentially, in my Graph class, I have a method called addVertex that takes in a vertex object and checks to see if it exists in the Graph already before adding it. If it does already exist, I want to return the object that is already in the graph, not the parameter I passed into the method. How would I go about implementing this?
class Graph():
    def __init__(self):
        self.adjList = {}

    def addVertex(self, vertex):
        try:
            self.adjList[vertex]
            return ???????????
        except KeyError:
            self.adjList[vertex] = {}
            return vertex

Just use a membership test:
if vertex in self.adjList:
The dict.__contains__ implementation will then use the __hash__ special method automatically.
Note that your Vertex class must also implement an __eq__ equality method:
class Vertex():
    def __init__(self, name):
        self.name = name

    def __hash__(self):
        return hash(self.name)

    def __eq__(self, other):
        if not isinstance(other, type(self)):
            return NotImplemented
        return self.name == other.name
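With both methods in place, `addVertex` can return the stored instance. A membership test alone doesn't hand back the stored key object, so one common trick (a sketch; the `_interned` helper dict is my own addition, not from the question) is to keep a second dict mapping each vertex to itself:

```python
class Vertex:
    def __init__(self, name):
        self.name = name

    def __hash__(self):
        return hash(self.name)

    def __eq__(self, other):
        if not isinstance(other, type(self)):
            return NotImplemented
        return self.name == other.name


class Graph:
    def __init__(self):
        self.adjList = {}
        # Hypothetical helper: maps each vertex to the instance stored in the graph.
        self._interned = {}

    def addVertex(self, vertex):
        if vertex in self._interned:
            # An equal vertex exists; return the object already in the graph.
            return self._interned[vertex]
        self._interned[vertex] = vertex
        self.adjList[vertex] = {}
        return vertex
```

Both lookups are O(1), so this avoids scanning `adjList` for the matching key.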

Related

Monkey patching? Proxy objects? Getting rid of a factory method and recycling a generic object

Basically, I have a generic class with a lot of methods. Before accessing those methods, I have to check its 'type' field, for example:
Note: generic_class_t is from a 3rd-party library I cannot control or re-design. I can only wrap but I want to be performant.
Here's the problem:
class generic_class_t:
    def __init__(self):
        self.type = ...

    def method1(self):
        pass

    def method2(self):
        pass

    attr1 = property(lambda self: ...)
    attr2 = property(lambda self: ...)
User code working with the generic class would always have to do something like this:
if cls.type == 1:
    cls.method1()
    cls.attr1
elif cls.type == 2:
    cls.method2()
    cls.attr2
...
What I want to do, is wrap that class in specialized classes:
class cls1:
    """Wrapper for type=1"""
    def __init__(self, cls):
        self.obj = cls

    def method1(self):
        self.obj.method1()

    attr = property(lambda self: self.obj.attr1)
    # There is no method2, etc.

class cls2:
    """Wrapper for type=2"""
    def __init__(self, cls):
        self.obj = cls

    def method2(self):
        self.obj.method2()

    attr = property(lambda self: self.obj.attr2)
    # There is no method1, etc.
Now of course, I would need a factory function:

def wrap(cls):
    if cls.type == 1:
        return cls1(cls)
    elif cls.type == 2:
        return cls2(cls)

o = wrap(cls)
The problem with that is memory and performance of the factory.
For each generic class instance, another specialized object would have to be constructed.
My question is:
Is there a way to directly / quickly patch the existing generic class instance and superimpose the specialized cls1, cls2, etc.?
I prefer not to create a new object at all.
I know a proxy object method can come in handy, but again, we have the proxy object itself.
Can I patch or swap the __dict__, etc., and do something fast to de-generalize the generic class and have specialized classes hijack its methods?
I hope this is clear.
I appreciate any advice.
Thanks!
I think you're underestimating the power of dictionaries. For example, let's say your generic interface has a method do and a property attr. Your factory does not need a giant if statement to choose which implementation it needs:
class Wrapper:
    def __init__(self, obj):
        self.obj = obj

class Type1(Wrapper):
    def do(self):
        return self.obj.method1()

    @property
    def attr(self):
        return self.obj.attr1

class Type2(Wrapper):
    def do(self):
        return self.obj.method2()

    @property
    def attr(self):
        return self.obj.attr2

type_map = {
    1: Type1,
    2: Type2,
}

def type_factory(obj):
    return type_map[obj.type](obj)
This way allows you to perform arbitrary operations on the underlying object. For example, with a couple of tweaks, you can have a version that does something like this:
class TypeCombined(Wrapper):
    def do(self):
        return self.obj.method1() + self.obj.method2()

    @property
    def attr(self):
        return self.obj.attr1 + self.obj.attr2

def type_factory(obj, type=None):
    return type_map[obj.type if type is None else type](obj)
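As a quick sanity check, the dict-based factory can be exercised with a stand-in for the third-party generic class (the `FakeGeneric` name and its attribute values are hypothetical, purely for illustration):

```python
class Wrapper:
    def __init__(self, obj):
        self.obj = obj

class Type1(Wrapper):
    def do(self):
        return self.obj.method1()

    @property
    def attr(self):
        return self.obj.attr1

class Type2(Wrapper):
    def do(self):
        return self.obj.method2()

    @property
    def attr(self):
        return self.obj.attr2

type_map = {1: Type1, 2: Type2}

def type_factory(obj):
    # A dict lookup replaces the if/elif chain entirely.
    return type_map[obj.type](obj)

class FakeGeneric:
    """Hypothetical stand-in for the third-party generic_class_t."""
    def __init__(self, type_):
        self.type = type_
        self.attr1, self.attr2 = "a1", "a2"

    def method1(self):
        return "m1"

    def method2(self):
        return "m2"

w = type_factory(FakeGeneric(2))
print(w.do(), w.attr)  # m2 a2
```

Adding a new type is then a one-line change to `type_map` rather than another branch in every caller.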
If you need something simpler because each type simply corresponds to a selection of methods and attributes, you can generate the classes with something like a metaclass, or more simply, use a configurable base class:
type_map = {
    1: ('method1', 'attr1'),
    2: ('method2', 'attr2'),
}

class Wrapper:
    def __init__(self, obj):
        self.obj = obj
        self.method_name, self.attr_name = type_map[obj.type]

    def do(self):
        return getattr(self.obj, self.method_name)()

    @property
    def attr(self):
        return getattr(self.obj, self.attr_name)

Different results for `isinstance` with `collections.abc.Collection` and `collections.abc.Mapping`

I am having trouble understanding how isinstance is meant to work with the Abstract Base Classes from collections.abc. My class implements all the methods specified for collections.abc.Mapping, yet isinstance returns False for Mapping while returning True for collections.abc.Collection. The class is not registered as a subclass of either ABC.
Running the following code (with Python 3.7, but I'm not sure if that matters):
class dictroproxy:
    def __init__(self, d):
        self._d = d

    def __getitem__(self, key):
        return self._d.__getitem__(key)

    def __contains__(self, key):
        return self._d.__contains__(key)

    def __len__(self):
        return self._d.__len__()

    def __iter__(self):
        return self._d.__iter__()

    def __eq__(self, other):
        if isinstance(other, dictroproxy):
            other = other._d
        return self._d.__eq__(other)

    def __ne__(self, other):
        return not self.__eq__(other)

    def get(self, key, default=None):
        return self._d.get(key, default)

    def keys(self):
        return self._d.keys()

    def values(self):
        return self._d.values()

    def items(self):
        return self._d.items()

if __name__ == "__main__":
    from collections.abc import Collection, Mapping

    dd = dictroproxy({"a": 1, "b": 2, "c": 3})
    print("Is collection?", isinstance(dd, Collection))
    print("Is mapping?", isinstance(dd, Mapping))
Gives me the following output:
Is collection? True
Is mapping? False
Am I missing something in my implementation, or do Collection and Mapping behave differently?
From my investigation, I found that some ABCs implement a __subclasshook__ method that determines whether a class structurally looks like a subclass of the ABC. At least as of Python 3.9, some ABCs implement __subclasshook__ and some do not: Collection does, and Mapping does not.
I haven't seen it documented anywhere which ABCs do this, so the only way to know may be to try it or to review the source code.
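If you want `isinstance(dd, Mapping)` to return True, you can either register the class (`Mapping.register(dictroproxy)`) or inherit from the ABC. Inheriting has a bonus: the Mapping mixins supply get, keys, values, items, __contains__, __eq__ and __ne__ for free, so only three abstract methods remain. A sketch:

```python
from collections.abc import Mapping

class DictRoProxy(Mapping):
    """Read-only dict proxy; only the three abstract methods are required."""
    def __init__(self, d):
        self._d = d

    def __getitem__(self, key):
        return self._d[key]

    def __len__(self):
        return len(self._d)

    def __iter__(self):
        return iter(self._d)

dd = DictRoProxy({"a": 1, "b": 2})
print(isinstance(dd, Mapping))      # True, via real inheritance
print(dd.get("a"), list(dd.keys()))  # mixin methods come for free
```

Registration alone also makes the isinstance check pass, but provides no mixin implementations.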

What is the difference between a readable property and a callable function that just returns the same data?

I have a property that returns a list of names containing "ash":
class BaseClass(object):
    def __init__(self):
        self.filter_key = ""
        self.names = []

    def filter_names(self, filter_key):
        self.filter_key = filter_key

    @property
    def student_names(self):
        return self.names

    def callable_function_names(self):
        return self.names
and then a student class that inherits BaseClass:
class StudentClass(BaseClass):
    @property
    def student_names(self):
        names = super(StudentClass, self).student_names
        return [name for name in names if self.filter_key in name]

    @property
    def filter_key(self):
        """Gets the name filter"""
        return self._filter_key

    @filter_key.setter
    def filter_key(self, key):
        """Sets the name filter"""
        self._filter_key = key

    # or by doing:
    def callable_function_names(self):
        names = super(StudentClass, self).callable_function_names()
        return [name for name in names if self.filter_key in name]
So if I create an object of the student class:
studentclsObj = StudentClass()
studentclsObj.filter_key = "ash"
print(studentclsObj.student_names)
print(studentclsObj.callable_function_names())
Both prints give the same result. Is there any difference, and which is the preferred, correct way to do this?
One use case of properties is keeping an API stable, which is one of the main strengths of Python IMO: you can replace a plain attribute with a property, adding new functionality without breaking old code.
I see three main uses of properties over plain attributes.
Read-only attributes
It is easy to create read-only attributes with properties. They are concise, self-documenting and simple:
class Foo:
    def __init__(self, bar):
        self._bar = bar

    @property
    def bar(self):
        return self._bar
Validation on writable properties
class Foo:
    def __init__(self, bar):
        self._bar = bar

    @property
    def bar(self):
        return self._bar

    @bar.setter
    def bar(self, val):
        if valid(val):
            self._bar = val
This is a kind of defensive programming.
Keep API compatibility
Imagine that you have a class for a bank account with a balance attribute:

class BankAccount:
    def __init__(self):
        self.balance = 0

You have this code and it works fine. But now your client says: I need you to log every balance lookup. You can replace the attribute with a property without breaking old code:
class BankAccount:
    def __init__(self):
        self._balance = 0

    @property
    def balance(self):
        self.log_balance_read()
        return self._balance
There is no difference between a property and a method that returns the same value. Go for the simpler option: use methods for actions and state changes and attributes for real attributes. If you later need to add logic to attribute lookup, Python will let you do it.

How to return an instance's attribute by default

In my code, I have a data store with multiple variables set to instances of a class similar to that below. (The reason is that this Interval class has lots of operator-overloading methods.)
class Interval(object):
    def __init__(self, value):
        self.value = value

data_store.a = Interval(1)
I want any references to data_store.a to return self.value rather than the Interval instance. Is this possible?
As an alternative to Malik's answer, you could make a a @property, the Pythonic equivalent of getters and setters for managing access to internal attributes:
class DataStore(object):
    def __init__(self):
        self.a = Interval(1)

    @property
    def a(self):
        return self._a.value

    @a.setter
    def a(self, value):
        self._a = value
Here _a is a private-by-convention attribute that stores the Interval instance. This works as you want it:
>>> store = DataStore()
>>> store.a
1
You need to extend your data store whose attribute is an interval object:
class DataStore(object):
    def __init__(self):
        self.a = Interval(1)

    def __getattribute__(self, attr):
        if attr == 'a':
            return object.__getattribute__(self, 'a').value
        return object.__getattribute__(self, attr)
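A quick check that the `__getattribute__` override behaves as intended, assuming the `Interval` class from the question:

```python
class Interval(object):
    def __init__(self, value):
        self.value = value

class DataStore(object):
    def __init__(self):
        self.a = Interval(1)

    def __getattribute__(self, attr):
        # Unwrap only attribute 'a'; every other lookup passes through unchanged.
        if attr == 'a':
            return object.__getattribute__(self, 'a').value
        return object.__getattribute__(self, attr)

store = DataStore()
print(store.a)  # 1
```

Note one caveat of this approach: because the getter unconditionally unwraps `.value`, any later assignment to `store.a` must always be an Interval, unlike the property version, where the setter controls what gets stored.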

Implementing Python persistent properties

In a class, I want to define N persistent properties. I can implement them as follows:
@property
def prop1(self):
    return self.__prop1

@prop1.setter
def prop1(self, value):
    self.__prop1 = value
    persistenceManagement()

@property
def prop2(self):
    return self.__prop2

@prop2.setter
def prop2(self, value):
    self.__prop2 = value
    persistenceManagement()

[...]

@property
def propN(self):
    return self.__propN

@propN.setter
def propN(self, value):
    self.__propN = value
    persistenceManagement()
Of course, the only thing that differs between these blocks is the property name (prop1, prop2, ..., propN). persistenceManagement() is a function that has to be called when the value of one of these properties changes.
Since these blocks of code are identical except for a single piece of information (the property name), I suppose there must be some way to replace each block with a single line declaring a persistent property with a given name. Something like
def someMagicalPatternFunction(...):
    [...]

someMagicalPatternFunction("prop1")
someMagicalPatternFunction("prop2")
[...]
someMagicalPatternFunction("propN")
...or maybe some decorating trick that I cannot see at the moment. Does someone have an idea how this could be done?
Properties are just descriptor classes and you can create your own and use them:
class MyDescriptor(object):
    def __init__(self, name, func):
        self.func = func
        self.attr_name = '__' + name

    def __get__(self, instance, owner):
        # Read the value from the instance, not the descriptor,
        # so each instance gets its own storage.
        return getattr(instance, self.attr_name)

    def __set__(self, instance, value):
        setattr(instance, self.attr_name, value)
        self.func(self.attr_name)

def postprocess(attr_name):
    print('postprocess called after setting', attr_name)

class Example(object):
    prop1 = MyDescriptor('prop1', postprocess)
    prop2 = MyDescriptor('prop2', postprocess)

obj = Example()
obj.prop1 = 'answer'  # prints 'postprocess called after setting __prop1'
obj.prop2 = 42        # prints 'postprocess called after setting __prop2'
Optionally you can make it a little easier to use with something like this:
def my_property(name, postprocess=postprocess):
    return MyDescriptor(name, postprocess)

class Example(object):
    prop1 = my_property('prop1')
    prop2 = my_property('prop2')
If you like the decorator @ syntax, you could do it this way (which also alleviates having to type the name of the property twice) -- however the dummy functions it requires seem a little weird...
def my_property(method):
    name = method.__name__
    return MyDescriptor(name, postprocess)

class Example(object):
    @my_property
    def prop1(self): pass

    @my_property
    def prop2(self): pass
The property class (yes, it's a class) is just one possible implementation of the descriptor protocol (which is fully documented here: http://docs.python.org/2/howto/descriptor.html). Just write your own custom descriptor and you'll be done.
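A related note: on Python 3.6+, the `__set_name__` hook lets a descriptor learn its own attribute name automatically, so you don't have to type the name at all. A sketch (the `persistence_management` method name is illustrative, standing in for persistenceManagement()):

```python
class PersistentProperty:
    """Descriptor that calls a hook on the owner instance after every assignment."""

    def __set_name__(self, owner, name):
        # Called automatically at class-creation time (Python 3.6+).
        self.attr_name = '_' + name

    def __get__(self, instance, owner):
        if instance is None:
            return self
        return getattr(instance, self.attr_name)

    def __set__(self, instance, value):
        setattr(instance, self.attr_name, value)
        instance.persistence_management()

class Example:
    prop1 = PersistentProperty()
    prop2 = PersistentProperty()

    def __init__(self):
        self.saved = 0

    def persistence_management(self):
        # Hypothetical persistence hook; counts calls for demonstration.
        self.saved += 1

obj = Example()
obj.prop1 = 'answer'
obj.prop2 = 42
print(obj.prop1, obj.saved)  # answer 2
```

Each property then really is a single declaration line, with no name repeated anywhere.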
