Optimizing 3d point hash function - python

I have a 3-dimensional point class whose hash function, according to the profiler, would be a good place to do some optimizing. Right now, I'm just passing a tuple of the coordinates to the built-in hash function:
def __hash__(self):
    return hash((self.x, self.y, self.z))
How can I make this faster? I'm assuming constructing a tuple each time isn't good. The coordinates are real-valued.

Using a tuple instead of your own class will be much faster.
If you really want to write p.x instead of p[0], then you can make your class a subclass of tuple and add property accessors. It will still be much faster than implementing your own class.
class Point3d(tuple):
    @property
    def x(self):
        return self[0]

    @property
    def y(self):
        return self[1]

    @property
    def z(self):
        return self[2]
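For example, a sketch of how such a subclass would be used (a plain tuple subclass is constructed from a single iterable unless you also add a __new__):

# Constructed like a tuple: pass the coordinates as one iterable.
p = Point3d((1.0, 2.0, 3.0))

print(p.x, p.y, p.z)                     # 1.0 2.0 3.0
print(hash(p) == hash((1.0, 2.0, 3.0)))  # True -- tuple.__hash__ is reused as-is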

Try using a named tuple instead of your own class.
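For instance, a minimal sketch using collections.namedtuple (the Point3d name is just illustrative):

from collections import namedtuple

Point3d = namedtuple('Point3d', ['x', 'y', 'z'])

p = Point3d(1.0, 2.0, 3.0)
print(p.x, p.y, p.z)  # 1.0 2.0 3.0
print(hash(p))        # namedtuple inherits tuple's fast __hash__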

Related

How can I reduce redundancy by defining the __add__ method only once?

class Ferrari:
    def __init__(self, no_cars):
        self.no_cars = no_cars

    def __add__(self, other):
        return self.no_cars + other.no_cars

class Jaquar:
    def __init__(self, no_cars):
        self.no_cars = no_cars

    def __add__(self, other):
        return self.no_cars + other.no_cars

f1 = Ferrari(5)
j1 = Jaquar(10)
total_cars = f1 + j1
print(total_cars)
I am trying to add two objects of different classes with operator overloading, but if I change the order of the operands I get an error. That is why I define the __add__ method in both classes, so I get the same output regardless of operand order, but the code seems redundant and I cannot figure out any other way to do it. What is the best alternative so that my code is not redundant?
Define the __radd__ method; it handles the case where the arguments are in the opposite order.
class Ferrari:
    def __init__(self, no_cars):
        self.no_cars = no_cars

    def __add__(self, other):
        return self.no_cars + other.no_cars

    def __radd__(self, other):
        return self + other
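As a quick check (this sketch assumes Jaquar no longer defines an __add__ of its own, so Python falls back to Ferrari.__radd__ for the reversed order):

class Jaquar:
    def __init__(self, no_cars):
        self.no_cars = no_cars

f1 = Ferrari(5)
j1 = Jaquar(10)

print(f1 + j1)  # 15, via Ferrari.__add__
print(j1 + f1)  # 15, via Ferrari.__radd__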

Python: Accessing dict with hashable object fails

I am using a hashable object as a key to a dictionary. The objects are hashable and I can store key-value pairs in the dict, but when I create a copy of the same object (which gives me the same hash), I get a KeyError.
Here is some small example code:
class Object:
    def __init__(self, x): self.x = x
    def __hash__(self): return hash(self.x)

o1 = Object(1.)
o2 = Object(1.)

hash(o1) == hash(o2)  # This is True

data = {}
data[o1] = 2.
data[o2]  # Desired: This should output 2.
In my scenario above, how can I achieve that data[o2] also returns 2.?
You need to implement both __hash__ and __eq__:
class Object:
    def __init__(self, x): self.x = x
    def __hash__(self): return hash(self.x)
    def __eq__(self, other):
        return self.x == other.x if isinstance(other, self.__class__) else NotImplemented
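With both methods defined, the lookup from the question works as desired (a quick sketch reusing the names above):

o1 = Object(1.)
o2 = Object(1.)

data = {}
data[o1] = 2.

# o2 hashes like o1 *and* compares equal to it, so the dict
# treats it as the same key.
print(data[o2])  # 2.0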
Per Python documentation:
if a class does not define an __eq__() method it should not define a __hash__() operation either
After finding the hash, Python's dictionary compares the keys using __eq__ and realizes they're different; that's why you're not getting the correct output.
You can use the __eq__ magic method to implement an equality check on your object:
def __eq__(self, other):
    if isinstance(other, Object):
        return self.x == other.x
    return NotImplemented
You can learn more about magic methods in the Python data model documentation.
So, as stated before, your object needs to implement __eq__ (equality, ==). If you want to understand why:
Sometimes the hashes of different objects are the same; this is called a collision.
The dictionary handles that by testing whether the objects are equal. If they are not, it has to manage the collision. How it does that is an implementation detail and can vary a lot. A dummy implementation would be a list of (key, value) tuples per hash.
Under the hood, such a dummy implementation might look like this:
dico[key] = [(object1, value), (object2, value)]
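To make that concrete, here is a very rough sketch of such a dummy mapping (the ToyDict name is purely illustrative, and this is not how CPython's dict actually works): the hash selects the bucket, and == decides whether two keys are the same key.

class ToyDict:
    """Toy mapping: buckets keyed by hash, collisions resolved with ==."""

    def __init__(self):
        self._buckets = {}  # hash value -> list of (key, value) pairs

    def __setitem__(self, key, value):
        bucket = self._buckets.setdefault(hash(key), [])
        for i, (existing_key, _) in enumerate(bucket):
            if existing_key == key:        # __eq__ decides "same key"
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def __getitem__(self, key):
        for existing_key, value in self._buckets.get(hash(key), []):
            if existing_key == key:        # a hash match alone is not enough
                return value
        raise KeyError(key)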

Subclassing and extending numpy.ndarray

I need some basic data class representations and I want to use existing numpy classes, since they already offer great functionality.
However, I'm not sure if this is the way to do it (although it works so far). So here is an example:
The Position class should act like a simple numpy array, but it should map the attributes .x, .y and .z to the three array components. I overrode the __new__ method, which returns an ndarray built from the initial array. To allow access and modification of the array, I defined properties along with setters for each one.
import numpy as np

class Position(np.ndarray):
    """Represents a point in a 3D space

    Adds setters and getters for x, y and z to the ndarray.
    """
    def __new__(cls, input_array=(np.nan, np.nan, np.nan)):
        obj = np.asarray(input_array).view(cls)
        return obj

    @property
    def x(self):
        return self[0]

    @x.setter
    def x(self, value):
        self[0] = value

    @property
    def y(self):
        return self[1]

    @y.setter
    def y(self, value):
        self[1] = value

    @property
    def z(self):
        return self[2]

    @z.setter
    def z(self, value):
        self[2] = value
This seems however a bit too much code for such basic logic, and I'm wondering if I'm doing it the "correct" way. I also need a bunch of other classes like Direction, which will have quite a few other functionalities (auto-norm on change etc.), and before I start integrating numpy, I thought I'd ask you…
I would argue ndarray is the wrong choice here; you probably want a simple namedtuple.
>>> import collections
>>> Position = collections.namedtuple('Positions', 'x y z')
>>> p = Position(1, 2, 3)
>>> p
Positions(x=1, y=2, z=3)
You could get the unpacking like so
>>> x, y, z = p
>>> x, y, z
(1, 2, 3)
>>>
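Attribute access works the same way, and because a namedtuple is still a tuple it converts cheaply to an array if you occasionally need numpy maths (a small illustrative sketch):

>>> p.x, p.y, p.z
(1, 2, 3)
>>> import numpy as np
>>> np.array(p)
array([1, 2, 3])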

Properly Implementing Python Star Operator for a Custom Class

I have a Python class called Point, that is basically a holder for an x and y value with added functionality for finding distance, angle, and such with another Point.
For passing a point to some other function that may require the x and y to be separate, I would like to be able to use the * operator to unpack my Point to just the separate x, y values.
I have found that this is possible if I override __getitem__ and raise StopIteration for any index beyond 1, with x corresponding to 0 and y to 1.
However, it doesn't seem proper to raise StopIteration when a ValueError/KeyError would be more appropriate for indices beyond 1.
Does anyone know of the correct way to implement the * operator for a custom class? Preferably, a way that does not raise StopIteration through __getitem__?
You can implement this by overriding the __iter__ magic method, like this:
class Point(object):
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __iter__(self):
        return (self.__dict__[item] for item in sorted(self.__dict__))

def printer(x, y):
    print(x, y)

printer(*Point(2, 3))
Output
2 3
Here's another way to do it that uses __dict__ but gives you precise control over the order without having to perform a sort on the keys for every access:
def __iter__(self): return (self.__dict__[item] for item in 'xy')
Of course, you could stash a sorted tuple, list or string of keys somewhere, but I think using a literal makes sense here.
And while I'm at it, here's one way to do the setter & getter methods.
def __getitem__(self, key): return (self.x, self.y)[key]
def __setitem__(self, key, val): setattr(self, 'xy'[key], val)
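Putting the pieces together, a quick sketch of how they behave (assuming __iter__, __getitem__ and __setitem__ are all added to the Point class above):

p = Point(2, 3)

x, y = p           # __iter__ lets the point unpack
print(x, y)        # 2 3

print(p[0], p[1])  # 2 3 -- __getitem__ indexes into (self.x, self.y)
p[1] = 7           # __setitem__ maps index 1 to the attribute 'y'
print(p.y)         # 7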

python class Vector, change from 3dimension to ndimension

I made this class that computes some operations for 3D vectors. Is there any way I can change the code so that it computes the same operations for vectors of any dimension n?
import sys

class Vector:
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z

    def __repr__(self):
        return "(%f,%f,%f)" % (self.x, self.y, self.z)

    def __add__(self, other):
        return (self.x + other.x, self.y + other.y, self.z + other.z)

    def __sub__(self, other):
        return (self.x - other.x, self.y - other.y, self.z - other.z)

    def __norm__(self):
        return (self.x**2 + self.y**2 + self.z**2)**0.5

    def escalar(self, other):
        return (self.x * other.x + self.y * other.y + self.z * other.z)

    def __mod__(self, other):
        return (self.x % other.x, self.y % other.y, self.z % other.z)

    def __neg__(self):
        return (-self.x, -self.y, -self.z)
As an example, for an n-dimensional vector, something like
class Vector:
    def __init__(self, components):
        self.components = components  # components should be a list

    def __add__(self, other):
        assert len(other.components) == len(self.components)
        added_components = []
        for i in range(len(self.components)):
            added_components.append(self.components[i] + other.components[i])
        return Vector(added_components)

    def dimensions(self):
        return len(self.components)
would be possible. Note that the __add__ override returns a new Vector instance, not a tuple as in your case. Then adapt your other methods likewise.
There are more 'clever' ways of adding elements from two lists into a third. You should really not do it this way if performance is an issue, though (or in any case other than an exercise, IMO). Look into numpy.
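For instance, the remaining methods from the question could be adapted along these lines (a sketch using zip and generator expressions; it keeps the original method names, including the non-standard __norm__):

import math

class Vector:
    def __init__(self, components):
        self.components = list(components)

    def __repr__(self):
        return "(" + ",".join("%f" % c for c in self.components) + ")"

    def __add__(self, other):
        return Vector(a + b for a, b in zip(self.components, other.components))

    def __sub__(self, other):
        return Vector(a - b for a, b in zip(self.components, other.components))

    def __norm__(self):  # kept from the question; not a standard special method
        return math.sqrt(sum(c * c for c in self.components))

    def escalar(self, other):  # dot product
        return sum(a * b for a, b in zip(self.components, other.components))

    def __mod__(self, other):
        return Vector(a % b for a, b in zip(self.components, other.components))

    def __neg__(self):
        return Vector(-c for c in self.components)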
Use a list to store the coefficients rather than explicit variables. For negating, adding, subtracting etc. you just iterate over the lists.
In terms of initialisation, you need to use *args for the input. Have a look at this post for an explanation of how it works: https://stackoverflow.com/a/3394898/1178052
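For example, a minimal sketch of the *args style of initialisation (purely illustrative):

class Vector:
    def __init__(self, *components):
        # Vector(1, 2, 3) or Vector(1, 2, 3, 4) -- any number of components
        self.components = components

    def __neg__(self):
        return Vector(*(-c for c in self.components))

    def __repr__(self):
        return "Vector%r" % (self.components,)

v = Vector(1, 2, 3, 4)
print(-v)  # Vector(-1, -2, -3, -4)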
