I am trying to extend a class whose name is 'Account' (from django-user-accounts app) with my own 'snAccount' class, since I need to add some extra fields to each user account. The problem comes when I try to override the "factory" method (#classmethod) of the parent class with mine:
# Original method
@classmethod
def create(cls, request=None, **kwargs):
    ...
    ...

# Override attempt
@classmethod
def create(cls, request=None, **kwargs):
    create_email = kwargs.pop("create_email", True)
    user = kwargs.pop("user", None)
    acc = Account.create(request, user, create_email)
    x_account = cls(acc, **kwargs)
    x_account.save()
    return x_account
The code above throws the following exception:
Django Version: 1.4.5
Exception Type: TypeError
Exception Value: create() takes at most 2 arguments (4 given)
Exception Location: /home/.../WebServices/models.py in create, line 27
... which I cannot understand, since the definition of that method takes 2 implicit arguments plus **kwargs. What am I doing wrong? I do not have much experience with Python, as you might see...
You haven't actually used any keyword arguments.
acc = Account.create(request, user=user, create_email=create_email)
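To make that concrete, here is a minimal sketch of the corrected override, assuming snAccount links back to Account through a one-to-one field (the field name, the import path, and the model layout are assumptions, not part of the original code):

from account.models import Account  # from django-user-accounts
from django.db import models


class snAccount(models.Model):
    # Hypothetical link back to the parent Account; your extra fields go here too.
    account = models.OneToOneField(Account)

    @classmethod
    def create(cls, request=None, **kwargs):
        create_email = kwargs.pop("create_email", True)
        user = kwargs.pop("user", None)
        # user and create_email are passed as keyword arguments so they land
        # in the parent's **kwargs instead of overflowing its positional slots.
        acc = Account.create(request, user=user, create_email=create_email)
        x_account = cls(account=acc, **kwargs)
        x_account.save()
        return x_account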
A Python library provides a function create_object that creates an object of type OriginalClass.
I would like to create my own class so that it takes the output of create_object and adds extra logic (on top of what create_object already does). Also, that new custom object should have all the properties of the base object.
So far I've attempted the following:
class MyClass(OriginalClass):
    def __init__(self, *args, **kwargs):
        super(MyClass, self).__init__(args, kwargs)
This does not accomplish what I have in mind, since the function create_object is not called and the extra logic it performs is not executed.
Also, I do not want to attach the output of create_object to an attribute of MyClass, like self.myobject = create_object(), since I want the result to be available simply by instantiating an object of type MyClass.
What would be the best way to achieve that functionality in Python? Does it correspond to an existing design pattern?
I am new to Python OOP so maybe the description provided is too vague. Please feel free to request in depth description from those vaguely described parts.
Try this:
class MyClass(OriginalClass):
    def __init__(self, custom_arg, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.init(custom_arg)

    def init(self, custom_arg):
        # add subclass initialization logic here
        self._custom_arg = custom_arg

    def my_class_method(self):
        pass

obj = create_object()
obj.__class__ = MyClass
obj.init(custom_arg)
obj.original_class_method()
obj.my_class_method()
You can change the __class__ attribute of an object if you know what you're doing.
If I were you, I would consider using an Adapter design pattern. It's maybe longer to code, but it's easier to maintain and understand.
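For illustration, a minimal sketch of what an adapter around create_object could look like (the method names here are placeholders, not part of the original library):

class MyObjectAdapter:
    """Wraps the object returned by create_object and adds extra behaviour."""

    def __init__(self, *args, **kwargs):
        # Build the underlying object through the library's factory so that
        # the extra creation logic inside create_object still runs.
        self._wrapped = create_object(*args, **kwargs)

    def my_extra_method(self):
        # New behaviour that OriginalClass does not provide.
        pass

    def __getattr__(self, name):
        # Delegate everything else to the wrapped OriginalClass instance,
        # so the adapter exposes all of its properties and methods.
        return getattr(self._wrapped, name)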
Looking at the original code, I would have implemented the create_object functions as class methods.
class SqueezeNet(nn.Module):
    ...

    @classmethod
    def squeezenet1_0(cls, pretrained: bool = False, progress: bool = True, **kwargs: Any) -> "SqueezeNet":
        return cls._squeezenet('1_0', pretrained, progress, **kwargs)

    @classmethod
    def squeezenet1_1(cls, pretrained: bool = False, progress: bool = True, **kwargs: Any) -> "SqueezeNet":
        return cls._squeezenet('1_1', pretrained, progress, **kwargs)

    @classmethod
    def _squeezenet(cls, version: str, pretrained: bool, progress: bool, **kwargs: Any) -> "SqueezeNet":
        model = cls(version, **kwargs)
        if pretrained:
            arch = 'squeezenet' + version
            state_dict = load_state_dict_from_url(model_urls[arch],
                                                  progress=progress)
            model.load_state_dict(state_dict)
        return model
So what does the class method do? It just instantiates the object as normal, but then calls a particular method on it before returning it. As such, there's nothing to do in your subclass. Calling MySqueezeNetSubclass._squeezenet would instantiate your subclass, not SqueezeNet. If you need to customize anything else, you can override _squeezenet in your own class, using super()._squeezenet to do the parent creation first before modifying the result.
class MySubclass(SqueezeNet):
    @classmethod
    def _squeezenet(cls, *args, **kwargs):
        model = super()._squeezenet(*args, **kwargs)
        # Do MySubclass-specific things to model here
        return model
But in the actual torchvision code, _squeezenet isn't a class method; it's a regular module-level function. There's not much you can do except patch it at runtime, which is hopefully something you can do before anything tries to call it. For example:
import torchvision.models.squeezenet

def _new_squeezenet(version, pretrained, progress, **kwargs):
    model = MySqueezeNetSubClass(version, **kwargs)
    # Maybe more changes specific to your code here. Specifically,
    # you might want to provide your own URL rather than one from
    # model_urls, etc.
    if pretrained:
        arch = 'squeezenet' + version
        state_dict = load_state_dict_from_url(model_urls[arch],
                                              progress=progress)
        model.load_state_dict(state_dict)
    return model

torchvision.models.squeezenet._squeezenet = _new_squeezenet
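With that patch in place, calling torchvision.models.squeezenet1_0(...) (which looks up _squeezenet in that module at call time) should return an instance of your subclass, assuming nothing imported _squeezenet directly before the patch was applied.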
The lesson here is that not everything is designed to be easily subclassed.
Within Python, I have created a User class that may have one of two UserTypes, Regular or Admin. The User class has multiple methods, and I want some of them to be accessible only by an admin.
Currently, I have this code:
from enum import Enum

class AuthorizationError(Exception):
    """Raised when a user attempts an admin-restricted task"""

class UserType(Enum):
    Regular = 0
    Admin = 1

class User:
    def __init__(self, username, user_type):
        self.username = username
        self.user_type = user_type

    def admin_required(self, func):
        def wrapper(*args, **kwargs):
            if self.user_type is UserType.Admin:
                return func(*args, **kwargs)
            else:
                raise AuthorizationError(f"User must be an admin to use {func.__name__}.")
        return wrapper

    def do_something_regular(self):
        print(f"{self.username} is doing something any regular user can do.")

    @admin_required
    def do_something_admin(self):
        print(f"{self.username} is doing something only an admin can do.")

me = User("MyUsername", UserType.Admin)
me.do_something_regular()
me.do_something_admin()
Which yields the following error:
Traceback (most recent call last):
  File "example.py", line 13, in <module>
    class User:
  File "example.py", line 31, in User
    @admin_required
TypeError: admin_required() missing 1 required positional argument: 'func'
I understand I can probably create a subclass for an admin, but the goal is to use a decorator within the User class to check for admin privileges.
I think the problem is that when I wrap the do_something_admin function, do_something_admin is passed to the self argument instead of self being passed as the instance of the class.
I have not been able to solve this problem. Keep in mind, I want to use a decorator in the solution. Thank you!
This is a dirty trick but it will allow you to use your decorator on methods:
from enum import Enum

class AuthorizationError(Exception):
    """Raised when a user attempts an admin-restricted task"""

class UserType(Enum):
    Regular = 0
    Admin = 1

def admin_required(func):
    def wrapper(self, *args, **kwargs):
        if self.user_type is UserType.Admin:
            return func(self, *args, **kwargs)
        else:
            raise AuthorizationError(f"User must be an admin to use {func.__name__}.")
    return wrapper

class User:
    def __init__(self, username, user_type):
        self.username = username
        self.user_type = user_type

    def do_something_regular(self):
        print(f"{self.username} is doing something any regular user can do.")

    @admin_required
    def do_something_admin(self):
        print(f"{self.username} is doing something only an admin can do.")

me = User("MyUsername", UserType.Admin)
me.do_something_regular()
me.do_something_admin()
The self argument is only populated when a method is bound to an instance of a class.
In this case, the referenced function (the decorator) is unbound and therefore the first argument (self) needs to be provided.
If you remove the self argument, this will work, e.g.:
class User:
    ...
    def admin_required(func):
        def wrapper(self, *args, **kwargs):
            if self.user_type is UserType.Admin:
                return func(self, *args, **kwargs)
            else:
                raise AuthorizationError(f"User must be an admin to use {func.__name__}.")
        return wrapper
    ...
The Python docs state:
Programs may name their own exceptions by creating a new exception
class (see Classes for more about Python classes). Exceptions should
typically be derived from the Exception class, either directly or
indirectly.
...
When creating a module that can raise several distinct errors, a
common practice is to create a base class for exceptions defined by
that module, and subclass that to create specific exception classes
for different error conditions.
From Python’s super() considered super!:
Each level strips-off the keyword arguments that it needs so that the
final empty dict can be sent to a method that expects no arguments at
all (for example, object.__init__ expects zero arguments)
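For reference, a minimal sketch of the kwargs-stripping pattern that quote describes, along the lines of the Shape/ColoredShape example in that article:

class Shape:
    def __init__(self, *, shapename, **kwargs):
        self.shapename = shapename
        # Pass the remaining keyword arguments up the chain; by the time
        # object.__init__ is reached, the dict should be empty.
        super().__init__(**kwargs)


class ColoredShape(Shape):
    def __init__(self, *, color, **kwargs):
        self.color = color
        super().__init__(**kwargs)


cs = ColoredShape(color='red', shapename='circle')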
Suppose I have the following StudentValueError and MissingStudentValue exceptions.
class StudentValueError(Exception):
    """Base class exceptions for Student Values"""
    def __init__(self, message, **kwargs):
        super().__init__(**kwargs)
        self.message = message  # You must provide at least an error message.

class MissingStudentValue(StudentValueError):
    def __init__(self, expression, message, **kwargs):
        super().__init__(message, **kwargs)
        self.expression = expression

    def __str__(self):
        return "Message: {0} Parameters: {1}".format(self.message, self.expression)
I want to create exceptions that are co-operative. I have two questions:
1. In that case, the Exception class constructor expects zero arguments (an empty dict), correct?
2. Does my example violate the LSP (Liskov substitution principle)?
The accepted answer provided here inherits from ValueError.
Exception takes no keyword arguments; it takes only a variable number of positional parameters via *args, so you need to change **kwargs to *args. I would also recommend passing message and expression together with *args to the super() call. Here is the example, which probably doesn't violate the LSP:
class StudentValueError(Exception):
    """Base class exceptions for Student Values"""
    def __init__(self, message='', *args):
        super().__init__(message, *args)
        self.message = message

class MissingStudentValue(StudentValueError):
    def __init__(self, message='', expression='', *args):
        super().__init__(message, expression, *args)
        self.expression = expression

    def __str__(self):
        return "Message: {0} Parameters: {1}".format(self.message, self.expression)

e = Exception('message', 'expression', 'yet_another_argument')
print(e)
e = StudentValueError('message', 'expression', 'yet_another_argument')
print(e)
e = MissingStudentValue('message', 'expression', 'yet_another_argument')
print(e)
e = MissingStudentValue()
print(e)
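For what it's worth, the first two prints should show the plain args tuple, ('message', 'expression', 'yet_another_argument'), because Exception's default __str__ is used, while the two MissingStudentValue prints go through the custom __str__ and show roughly Message: message Parameters: expression, and an empty Message: / Parameters: line for the no-argument case.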
I have the following method:
def _loginEventHandler(cmdID, *args):
    if cmdID == Login.LOGIN_LOGED:
        user = args[0]
        print("User", user.userTypeID, "logged in")
That method is called like this from a different module:
user = User(nUserSelected)
_loginEventHandler(Login.LOGIN_LOGED,user)
The interpreter throws an AttributeError:
  File "/main.py", line 79, in _loginEventHandler
    print("User", user.userTypeID, "logged in")
AttributeError: 'tuple' object has no attribute 'userTypeID'
The question is: what is the proper way of taking arguments from *args (especially if they are custom types like User), and why is args[0] a tuple?
You didn't include self in the definition of the method. The first argument passed to a method is always the instance itself. That means that in your method, cmdID is taking the value of the instance, and the first element of args is actually the value of Login.LOGIN_LOGED, which is presumably a tuple.
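A minimal sketch of the corrected method, assuming _loginEventHandler really is defined inside a class (the class name here is hypothetical):

class LoginController:  # hypothetical enclosing class
    def _loginEventHandler(self, cmdID, *args):
        if cmdID == Login.LOGIN_LOGED:
            user = args[0]
            print("User", user.userTypeID, "logged in")

It would then be called on an instance, e.g. controller._loginEventHandler(Login.LOGIN_LOGED, user).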
So I tried to come up with a minimal version of the User class and a Login enum, but I don't see any problems here. The output seems okay:
from enum import Enum

class Login(Enum):
    LOGIN_LOGED = 1

class User:
    def __init__(self, userTypeID):
        self.userTypeID = userTypeID

user = User(1)
_loginEventHandler(Login.LOGIN_LOGED, user)
which gives
('User', 1, 'logged in')
How can I write a mixin that raises an exception if the class using it is not defined properly?
If I do these checks in the __init__ or __new__ methods of the mixin, the exception is raised only when the erroneous class tries to create an instance. That is too late; ideally the exception should be thrown as soon as the interpreter encounters the wrong class definition. (Assume that how to detect whether a class is acceptable is a trivial matter.)
To Illustrate the question
class ASampleMixin:
    """
    A sample docstring
    """
    def a_method(self):
        raise NotImplementedError

    def class_rule(self):
        if something is wrong:
            return False
        return True

    # more methods

class AClass(ASampleMixin, BaseClass):
    """
    This class should satisfy a condition specified in class_rule method of the mixin
    """
    # some methods
I am currently performing the check in the __init__ method of the mixin, which raises an exception if the rule returns False. Instead, this needs to happen at the time AClass is read by the interpreter, not when I try to create an instance of AClass.
Is this possible in a dynamically typed language like Python 3.5?
This sounds as if you want to create a custom metaclass that performs the check upon creation of the class object. See the documentation for metaclasses.
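For example, a minimal sketch of a metaclass that runs the check at class definition time might look like this; note that class_rule has to become a classmethod (or a check done directly by the metaclass), since no instance exists yet, and the rule shown here is purely illustrative:

class RuleCheckedMeta(type):
    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        # Skip validation for the mixin itself; only validate subclasses.
        if bases and not cls.class_rule():
            raise TypeError("{} does not satisfy the mixin's class rule".format(name))
        return cls


class ASampleMixin(metaclass=RuleCheckedMeta):
    @classmethod
    def class_rule(cls):
        # Illustrative rule: subclasses must define a 'required_field' attribute.
        return getattr(cls, "required_field", None) is not None


class GoodClass(ASampleMixin):
    required_field = "something"   # passes the rule


class BadClass(ASampleMixin):      # raises TypeError as soon as this class
    pass                           # statement is executed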
A metaclass example as reference:
class CustomType(type):
    def __call__(cls, *args, **kwargs):
        if not CustomType.some_rule(kwargs.pop('some_attr', None)):
            raise Exception('Abort! Abort!')
        return super(CustomType, cls).__call__(*args, **kwargs)

    @staticmethod
    def some_rule(var):
        if type(var) is not str:
            return False
        return True

class A(metaclass=CustomType):  # Python 3 syntax for setting the metaclass
    pass

class B(A):
    pass

b = B(some_attr='f')  # all is well
b = B()  # raises