I am trying to create a NaN value for integers. The design I am thinking about is the following: I define an isnan lambda function in the class definition header.
import numpy as np

class Integer(object):
    type = int
    nan = -1
    isnan = lambda val: val == -1

    def __new__(cls, value):
        return cls.type(value)

class Float(object):
    type = float
    isnan = lambda val: np.isnan(val)

    def __new__(cls, value):
        return cls.type(value)
but it returns an error:

>>> Integer.isnan(1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unbound method <lambda>() must be called with Integer instance as first argument (got int instance instead)
The issue is that your isnan functions are being treated as instance methods by Python. Even though you're using them "unbound", Python 2 still does a type check to ensure that the first argument to a method is an instance of the class (e.g. self). In Python 3, unbound methods have been discarded, and your code would work just fine.
You can work around this by passing the lambda function through staticmethod:
isnan = staticmethod(lambda val: val == -1)
Or you could use a regular function definition, with staticmethod as a decorator:
@staticmethod
def isnan(val):
    return val == -1
Note that if you made your classes inherit from their type value, you could call isnan as an actual instance method:
class Integer(int):
    # no __new__ needed
    def isnan(self):
        return self == -1
This would let you call Integer(5).isnan(), rather than what you do in your current code.
One final suggestion: Don't use type as a variable name, since it is already the name of the built-in type class. It's not as bad using it as a class attribute as it would be as a variable (where it would shadow the built-in), but it can still be confusing.
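As an illustration, here is a minimal sketch that combines the staticmethod fix with that renaming advice; the attribute name int_type below is just a hypothetical choice:

class Integer(object):
    int_type = int  # hypothetical rename, avoids confusion with the builtin type
    nan = -1
    isnan = staticmethod(lambda val: val == -1)

    def __new__(cls, value):
        return cls.int_type(value)

print(Integer.isnan(Integer(-1)))  # True
print(Integer.isnan(Integer(5)))   # False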
You need to make it a static method. Two choices:
class Integer(object):
    type = int
    nan = -1

    @staticmethod
    def isnan(v):
        return v == -1

    isnan_lambda = staticmethod(lambda v: v == -1)

    def __new__(cls, value):
        return cls.type(value)
print Integer.isnan(5)
print Integer.isnan_lambda(5)
Let's say I have an Entity class:
class Entity(dict):
    def save(self):
        ...
I can wrap a dict object with Entity(dict_obj).
But is it possible to create a class that can wrap any type of object, e.g. int, list, etc.?
P.S. I have come up with the following workaround. It doesn't work on more complex objects, but it seems to work with basic ones. I'm completely unsure whether there are any gotchas, and I might get penalised on efficiency by creating the class every time; please let me know:
class EntityMixin(object):
    def save(self):
        ...

def get_entity(obj):
    class Entity(obj.__class__, EntityMixin):
        pass
    return Entity(obj)
Usage:
>>> a = get_entity(1)
>>> a + 1
2
>>> b = get_entity('b')
>>> b.upper()
'B'
>>> c = get_entity([1,2])
>>> len(c)
2
>>> d = get_entity({'a':1})
>>> d['a']
1
>>> d = get_entity(map(lambda x : x, [1,2]))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/jlin/projects/django-rest-framework-queryset/rest_framework_queryset/entity.py", line 11, in get_entity
return Entity(obj)
TypeError: map() must have at least two arguments.
Improve efficiency:
EntityClsCache = {}

class EntityMixin(object):
    def save(self):
        ...

def _get_entity_cls(obj):
    class Entity(obj.__class__, EntityMixin):
        pass
    return Entity

def get_entity(obj):
    cls = None
    try:
        cls = EntityClsCache[obj.__class__]
    except KeyError:
        cls = _get_entity_cls(obj)
        EntityClsCache[obj.__class__] = cls
    return cls(obj)
The solution you propose looks elegant, but it lacks caching: as written, you'll construct a unique class every time get_entity() is called, even if the types are all the same.
Python has metaclasses, which act as class factories. Given that a metaclass's methods override those of the class, not the instance, we can implement class caching:
class EntityMixin(object):
    pass

class CachingFactory(type):
    __registry__ = {}

    # Instead of declaring an inner class,
    # we can also return type("Wrapper", (type_, EntityMixin), {}) right away,
    # which, however, looks more obscure
    def __makeclass(cls, type_):
        class Wrapper(type_, EntityMixin):
            pass
        return Wrapper

    # This is the simplest form of caching; for a more realistic and less error-prone example,
    # better use a more unique/complex key, for example, a tuple of `value`'s ancestors --
    # you can obtain them via type(value).__mro__
    def __call__(cls, value):
        t = type(value)
        typename = t.__name__
        if typename not in cls.__registry__:
            cls.__registry__[typename] = cls.__makeclass(t)
        return cls.__registry__[typename](value)

class Factory(object):
    __metaclass__ = CachingFactory
This way, Factory(1) performs Factory.__call__(1), which resolves to CachingFactory.__call__(Factory, 1) (without the metaclass, that would be an ordinary constructor call instead, which would produce a class instance -- but we want to make a class first and only then instantiate it).
We can verify that the objects created by Factory are instances of the same class, which is crafted specifically for them the first time:
>>> type(Factory(map(lambda x: x, [1, 2]))) is type(Factory([1]))
True
>>> type(Factory("a")) is type(Factory("abc"))
True
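As an aside, if you run this under Python 3, the __metaclass__ class attribute is ignored; the metaclass is passed in the class header instead. A minimal sketch of just that difference, reusing CachingFactory from above:

# Python 3 spelling: the metaclass goes in the class statement itself
class Factory(metaclass=CachingFactory):
    pass

print(type(Factory("a")) is type(Factory("abc")))  # True -- same cached Wrapper class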
I'm using 'an illustrated guide to learning python 3' to learn Python. Chapter 21 is about classes. In this chapter it uses 'self' apparently incorrectly? I tried writing my own code for an example, and it didn't work, so I input the example code and, surprisingly, it did not work either.
class CorrectChair:
    '''blah'''
    max_occupants = 4

    def __init__(self, id):
        self.id = id
        self.count = 0

    def load(self, number):
        new_val = self._check(self.count + number)
        self.count = new_val

    def unload(self, number):
        new_val = self._check(self.count - number)
        self.count = new_val

    def _check(self, number):
        if number < 0 or number > self.max_occupants:
            raise ValueError('Invalid count:{}'.format(number))
        return number
It errors out into:
Traceback (most recent call last):
  File "<pyshell#2>", line 1, in <module>
    CorrectChair.load(1)
TypeError: load() missing 1 required positional argument: 'number'
It appears not to be recognizing the self argument. How can I fix this? Googling has not helped; every example I see makes it look like this should work.
It should be adding number to self.count; instead it ignores that it's self-referential and asks for a second argument.
You must create an instance and call the methods on it:
replace CorrectChair.load(1) with:
c_chair = CorrectChair(some_id)
c_chair.load(1)
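For completeness, a short sketch of what that looks like end to end; the 42 passed as the id is just a placeholder value:

c_chair = CorrectChair(42)  # 42 stands in for some_id
c_chair.load(2)
print(c_chair.count)        # 2
c_chair.unload(1)
print(c_chair.count)        # 1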
The load function is actually an instance method. In Python, the first parameter of an instance method always points to the instance, which is implicitly passed to the method when it is invoked. To call an instance method, you first need to create an object, then invoke the method using dot syntax. For example:
id = 3
newCorrectChair = CorrectChair(id)
# self is implicitly passed here
newCorrectChair.load(3)
If you were trying to write a class method, you have to add a @classmethod decorator.
class CorrectChair:
    # Blah...

    @classmethod
    def load(cls, num):
        # do something
        return
If you were trying to write a static method, you should decorate it with the @staticmethod decorator.
class CorrectChair:
    # Blah...

    @staticmethod
    def load(num):
        # do something
        return
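With either decorator, the call from the traceback, CorrectChair.load(1), would at least be accepted, because no instance is required. A minimal sketch, assuming a placeholder body that just returns its argument:

class CorrectChair:
    max_occupants = 4

    @staticmethod
    def load(num):
        # placeholder body, for illustration only
        return num

print(CorrectChair.load(1))  # 1 -- no instance needed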
The error is showing that you're trying to call the method straight from the class, while the method also expects an object reference.
Before you call any of those methods that include self, you need to create an instance of the class first.
In your case, the code should be:
correct_chair = CorrectChair(id)
correct_chair.load(1)
Comparing this to the method in your class, correct_chair corresponds to self, and 1 corresponds to number:
def load(self, number):
    new_val = self._check(self.count + number)
    self.count = new_val
I am trying to work out how I can change the functionality of __setattr__ of a class using a decorator on the class, but I am running into an issue when trying to access self inside the function that replaces __setattr__. If I change the problematic line so that it doesn't access self, e.g. replacing it with val = str(val), I get the expected behaviour.
I see similar problems in other questions here, but they use a different approach, where a class is used as a decorator. My approach below feels less complicated, so I'd love to keep it like that if possible.
Why might a not be defined on self/foo where I expect it to be?
# Define the function to be used as a decorator
# The decorator function accepts the relevant fieldname as argument
# and returns the function that wraps the class
def field_proxied(field_name):
    # wrapped accepts the class (type) and modifies the functionality of
    # __setattr__ before returning the modified class (type)
    def wrapped(wrapped_class):
        super_setattr = wrapped_class.__setattr__

        # The new __setattr__ implementation makes sure that given an int,
        # the fieldname becomes a string of that int plus the int in the
        # `a` attribute
        def setattr(self, attrname, val):
            if attrname == field_name and isinstance(val, int):
                val = str(self.a + val)  # <-- Crash. No attribute `a`
                super_setattr(self, attrname, val)

        wrapped_class.__setattr__ = setattr
        return wrapped_class
    return wrapped

@field_proxied("b")
class Foo(object):
    def __init__(self):
        self.a = 2
        self.b = None

foo = Foo()
# <-- At this point, `foo` has no attribute `a`
foo.b = 4
assert foo.b == "6"  # Became a string
The problem is simple; you just need a one-line change.
def setattr(self, attrname, val):
    if attrname == field_name and isinstance(val, int):
        val = str(self.a + val)
    super_setattr(self, attrname, val)  # <-- changed line
The reason is that in your original method, super_setattr was only called when attrname == field_name. So self.a = 2 in __init__ never set anything at all, since "a" != "b".
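With that one-line change in place, the example from the question should behave as intended; a quick check, assuming the decorator and Foo are defined as above:

foo = Foo()
assert foo.a == 2    # "a" != "b", so it is stored unchanged
foo.b = 4
assert foo.b == "6"  # 4 became str(self.a + 4)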
Part A
I want to do some checking on arguments to a class instantiation and possibly return None if it doesn't make sense to even create the object.
I've read the docs but I don't understand what to return in this case.
class MyClass:
    def __new__(cls, Param):
        if Param == 5:
            return None
        else:
            # What should 'X' be?
            return X
What should X be in return X?
It cannot be self, because the object doesn't exist yet, so there is no self to refer to in this context.
Part B
Tied to my question, I don't understand the need to have the cls parameter.
If you call the constructor of MyClass - var = MyClass(1) - won't cls always be MyClass?
How could it be anything else?
According to the docs, cls in object.__new__(cls[, ...]) is:
...the class of which an instance was requested as its first argument.
(I'm assuming you are using Python 3 because you provided a link to Python 3 docs)
X could be super().__new__(cls).
super() gives you access to the parent class (in this case it is simply object). Most of the time when you are overriding methods, you will need to call the parent class's method at some point.
See this example:
class MyClass:
    def __new__(cls, param):
        if param == 5:
            return None
        else:
            return super().__new__(cls)

    def __init__(self, param):
        self.param = param
And then:
a = MyClass(1)
print(a)
print(a.param)
>> <__main__.MyClass object at 0x00000000038964A8>
1
b = MyClass(5)
print(b)
print(b.param)
>> None
Traceback (most recent call last):
File "main.py", line 37, in <module>
print(b.param)
AttributeError: 'NoneType' object has no attribute 'param'
You could just return an instance of cls, like this: return object.__new__(cls). Because every class is a subclass of object, you can use that as the object creator for your class. The returned object is then passed as the first argument to __init__(), together with whatever positional and keyword arguments you passed when constructing the object. There you create the instance variables by assigning those values.
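A minimal sketch of what that describes, calling object.__new__ directly rather than going through super():

class MyClass:
    def __new__(cls, param):
        if param == 5:
            return None
        # object.__new__ creates a bare instance of cls; because the returned
        # object is an instance of cls, __init__ is then called with the same
        # arguments that were passed to MyClass(...)
        return object.__new__(cls)

    def __init__(self, param):
        self.param = param

print(MyClass(1).param)  # 1
print(MyClass(5))        # None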
I ran the code below, calling a function from within the constructor.
First:
>>> class PrintName:
...     def __init__(self, value):
...         self._value = value
...         printName(self._value)
...     def printName(self, value):
...         for c in value:
...             print c
...
>>> o = PrintName('Chaitanya')
C
h
a
i
t
a
n
y
a
Once again I run this, and I get this:
>>> class PrintName:
...     def __init__(self, value):
...         self._value = value
...         printName(self._value)
...     def printName(self, value):
...         for c in value:
...             print c
...
>>> o = PrintName('Hello')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 4, in __init__
NameError: global name 'printName' is not defined
Can I not call a function in the constructor? And why the deviation in the execution of similar code?
Note: I forgot to call the function local to the class by using self (e.g. self.printName()). Apologies for the post.
You need to call self.printName, since your function is a method belonging to the PrintName class.
Or, since your printName function doesn't need to rely on object state, you could just make it a module-level function:
class PrintName:
    def __init__(self, value):
        self._value = value
        printName(self._value)

def printName(value):
    for c in value:
        print c
Instead of
printName(self._value)
you wanted
self.printName(self._value)
It probably worked the first time because you had another function printName in a parent scope.
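A minimal sketch of the situation that answer describes, where a hypothetical leftover module-level printName (defined in an earlier session) masks the missing self:

def printName(value):           # hypothetical leftover from an earlier experiment
    for c in value:
        print(c)

class PrintName:
    def __init__(self, value):
        self._value = value
        printName(self._value)  # resolves to the global above, so no NameError

PrintName('Hi')                 # prints H then i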
What you want is self.printName(self._value) in __init__, not just printName(self._value).
I know this is an old question, but I just wanted to add that you can also call the function using the Class name and passing self as the first argument.
Not sure why you'd want to though, as I think it might make things less clear.
class PrintName:
    def __init__(self, value):
        self._value = value
        PrintName.printName(self, self._value)

    def printName(self, value):
        for c in value:
            print(c)
See Chapter 9 of the Python tutorial for more info:
9.3.4. Method Objects
Actually, you may have guessed the answer: the special thing about methods is that the object is passed as the first argument of the function. In our example, the call x.f() is exactly equivalent to MyClass.f(x). In general, calling a method with a list of n arguments is equivalent to calling the corresponding function with an argument list that is created by inserting the method’s object before the first argument.
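A quick demonstration of that equivalence, reusing the PrintName class from the answer above:

o = PrintName('ab')           # __init__ already prints a and b
o.printName('cd')             # bound method call: prints c and d
PrintName.printName(o, 'cd')  # equivalent plain-function call: prints c and d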