I am struggling to find a way to have a factory class (I use factory_boy version 2.11.1 with Python 3) use an alternate constructor defined as a @classmethod.
So let's say we have a class for building a 2D-point object with a default constructor and 2 additional ones:
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    @classmethod
    def fromlist(cls, coords):  # alternate constructor from list
        return cls(coords[0], coords[1])

    @classmethod
    def duplicate(cls, obj):  # alternate constructor from another Point
        return cls(obj.x, obj.y)
I create a basic Point factory:
import factory

class PointFactory(factory.Factory):
    class Meta:
        model = Point
        inline_args = ('x', 'y')

    x = 1.
    y = 2.
By default it calls the class's __init__ constructor, which seems very logical. But I could not find a way to pass inline_args as coords so that the alternate constructor fromlist is used instead. Is there a way to do so?
This is my first experience working and building factories in general so I may also be looking up at the wrong keywords on the web...
The point of factory_boy is to make it easy to produce test instances. You'd just call PointFactory() and you are done, you have test instances for the rest of your code. This usecase doesn't need to use any of the alternative constructors, ever. The factory would just use the main constructor.
If you are thinking that factory_boy factories must be defined to test your extra constructors, then you have misunderstood their use. Use factory_boy factories to create test data for other code to be tested. You'd not use them to test the Point class (other than to generate test data to pass to one of your constructors).
Note that inline_args is only needed if your constructor doesn't accept keyword arguments at all. Your Point() class has no such restriction; x and y can be used both as positional and as keyword arguments. You can safely drop inline_args from your definition, the factory will work regardless.
If you must use one of the other constructors (because you can't create test data with the main constructor), just pass the specific constructor method in as the model:
class PointListFactory(factory.Factory):
    class Meta:
        model = Point.fromlist

    coords = (1., 2.)
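Running the factory itself requires factory_boy to be installed, but stripped of the factory machinery, the call above reduces to invoking the model callable with the declared attributes as keyword arguments. A self-contained sketch of that equivalent direct call (not factory_boy's internals verbatim):

```python
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    @classmethod
    def fromlist(cls, coords):
        return cls(coords[0], coords[1])

# PointListFactory() effectively performs this call:
p = Point.fromlist(coords=(1., 2.))
print(p.x, p.y)  # 1.0 2.0
```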
I'm trying to wrap my head around how to utilize inheritance in some code I'm writing for an API. I have the following parent class which holds a bunch of common variables that I'd like to instantiate once, and inherit with other classes to make my code look cleaner:
class ApiCommon(object):
    def __init__(self, _apikey, _serviceid=None, _vclversion=None,
                 _aclname=None, _aclid=None):
        self.BaseApiUrl = "https://api.fastly.com"
        self.APIKey = _apikey
        self.headers = {'Fastly-Key': self.APIKey}
        self.ServiceID = _serviceid
        self.VCLVersion = _vclversion
        self.ACLName = _aclname
        self.ACLid = _aclid
        self.Data = None
        self.IP = None
        self.CIDR = None
        self.fullurl = None
        self.r = None
        self.jsonresp = None
        self.ACLcomment = None
        self.ACLentryid = None
And I am inheriting it in another class below, like so in a lib file called lib/security.py:
from apicommon import ApiCommon

class EdgeAclControl(ApiCommon):
    def __init__(self):
        super(EdgeAclControl, self).__init__()
        ...

    def somemethodhere(self):
        return 'stuff'
When I instantiate an ApiCommon object, I can't access the methods defined in EdgeAclControl(ApiCommon). Example of what I'm trying, which isn't working:
from lib import security
gza = security.ApiCommon(_aclname='pytest', _apikey='mykey',
                         _serviceid='stuffhere', _vclversion=5)
gza.somemethodhere()
How would I instantiate ApiCommon and have access to the methods in EdgeAclControl?
Your current code appears to be trying to use inheritance backwards. When you create an instance of ApiCommon, it will only get the methods defined in that base class. If you want to get methods from a subclass, you need to create an instance of the subclass instead.
So the first fix you need to make is to change gza = security.ApiCommon(...) to gza = EdgeAclControl(...) (though depending on how you're doing your imports, you might need to prefix the class name with a module).
The second issue is that your EdgeAclControl class doesn't take the arguments that its base class needs. Your current code doesn't pass any arguments to super(...).__init__, which doesn't work since the _apikey parameter is required. You could repeat all the arguments again in the subclass, but a lot of the time it's easier to use variable-argument syntax instead.
I suggest that you change EdgeAclControl.__init__ to accept *args and/or **kwargs and pass on those variable arguments when it calls its parent's __init__ method using super. That would look like this:
def __init__(self, *args, **kwargs):
    super(EdgeAclControl, self).__init__(*args, **kwargs)
Note that if, as in this example, you're not doing anything other than calling the parent __init__ method in the derived __init__ method, you could get the same effect by just deleting the derived version entirely!
It's likely that your real code does something in EdgeAclControl.__init__, so you may need to keep it in some form. Note that it can take arguments normally in addition to the *args and **kwargs. Just remember to pass on the extra arguments, if necessary, when calling the base class.
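Putting both fixes together (instantiate the subclass, and forward arguments via *args/**kwargs), a trimmed-down version of the two classes looks like this:

```python
class ApiCommon(object):
    def __init__(self, _apikey, _serviceid=None, _vclversion=None,
                 _aclname=None):
        self.APIKey = _apikey
        self.ServiceID = _serviceid
        self.VCLVersion = _vclversion
        self.ACLName = _aclname

class EdgeAclControl(ApiCommon):
    def __init__(self, *args, **kwargs):
        # forward everything on to ApiCommon.__init__
        super(EdgeAclControl, self).__init__(*args, **kwargs)

    def somemethodhere(self):
        return 'stuff'

# instantiate the subclass, not the base class
gza = EdgeAclControl(_aclname='pytest', _apikey='mykey',
                     _serviceid='stuffhere', _vclversion=5)
print(gza.somemethodhere())  # stuff
```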
May I ask why you have to instantiate an ApiCommon object? I don't see any point in doing so.
If you insist on doing that, you would have to add methods to the superclass, which subclasses may then override. But you still couldn't access the methods of EdgeAclControl from an ApiCommon object.
Note: although my particular use is Flask related, I think the question is more general.
I am building a Flask web application meant to be customized by the user. For example, the user is expected to provide a concrete subclass of a DatabaseInterface and may add to the list of certain ModelObjects that the application knows how to handle.
What is the best way to expose the various hooks to users, and indicate required and optional status? 'Best' here primarily means most 'pythonic', or "easiest for python users to grasp", but other criteria like not causing headaches down the road are certainly worth mentioning.
Some approaches I've considered:
Rely solely on documentation
Create a template file with documented overrides, much like default config files for many servers. E.g.
app = mycode.get_app()
##Add your list of extra foo classes here
#app.extra_foos = []
Create a UserOverrides class with an attr/method for each of the hooks; possibly split into RequiredOverrides and OptionalOverrides
Create an empty class with unimplemented methods that the user must subclass into a concrete instance
One method is by using abstract base classes (abc module). For example, you can define an ABC with abstract methods that must be overridden by child classes like this:
from abc import ABC, abstractmethod

class MyClass(ABC):  # inherit from ABC
    def __init__(self):
        pass

    @abstractmethod
    def some_method(self, args):
        # must be overridden by child class
        pass
You would then implement a child class like:
class MyChild(MyClass):
    # uses parent's __init__ by default
    def some_method(self, args):
        # overrides the abstract method
        pass
You can specify what everything needs to do in the overridden methods with documentation. There are also decorators for abstract properties, class methods, and static methods. Attempting to instantiate an ABC that does not have all of its abstract methods/properties overridden will result in an error.
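To illustrate the enforcement, here is a self-contained sketch: the concrete child works, while instantiating the ABC directly raises a TypeError.

```python
from abc import ABC, abstractmethod

class MyClass(ABC):
    @abstractmethod
    def some_method(self, args):
        pass

class MyChild(MyClass):
    def some_method(self, args):
        return args

child = MyChild()             # fine: all abstract methods are overridden
print(child.some_method(42))  # 42

try:
    MyClass()                 # abstract method not overridden
except TypeError as exc:
    print("refused:", exc)
```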
Inheritance. Is. Bad.
This is especially true in Python, which gives you a nice precedent to avoid the issue. Consider the following code:
len({1,2,3}) # set with length 3
len([1,2,3]) # list with length 3
len((1,2,3)) # tuple with length 3
Which is cool and all for the built-in data structures, but what if you want to make your own data structure and have it work with Python's len? Simple:
class Duple(object):
    def __init__(self, fst, snd):
        super(Duple, self).__init__()
        self.fst = fst
        self.snd = snd

    def __len__(self):
        return 2
A Duple is a two-element (only) data structure (calling it with more or fewer arguments raises a TypeError) and now works with len:
len(Duple(1,2)) # 2
Which is exactly how you should do this:
def foo(arg):
    return arg.__foo__()
Any class that wants to work with your foo function just implements the __foo__ magic method, which is how len works under the hood.
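As a sketch of that protocol (the names foo and __foo__ are made up for illustration):

```python
def foo(arg):
    # dispatch to the magic method, just like len() dispatches to __len__
    return arg.__foo__()

class Bar(object):
    def __foo__(self):
        return "Bar implements the foo protocol"

print(foo(Bar()))  # Bar implements the foo protocol
```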
I have class Base. I'd like to extend its functionality in a class Derived. I was planning to write:
class Derived(Base):
    def __init__(self, base_arg1, base_arg2, derived_arg1, derived_arg2):
        super().__init__(base_arg1, base_arg2)
        # ...

    def derived_method1(self):
        # ...
Sometimes I already have a Base instance, and I want to create a Derived instance based on it, i.e., a Derived instance that shares the Base object (doesn't re-create it from scratch). I thought I could write a static method to do that:
b = Base(arg1, arg2) # very large object, expensive to create or copy
d = Derived.from_base(b, derived_arg1, derived_arg2) # reuses existing b object
but it seems impossible. Either I'm missing a way to make this work, or (more likely) I'm missing a very big reason why it can't be allowed to work. Can someone explain which one it is?
[Of course, if I used composition rather than inheritance, this would all be easy to do. But I was hoping to avoid the delegation of all the Base methods to Derived through __getattr__.]
Rely on what your Base class does with base_arg1 and base_arg2:
class Base(object):
    def __init__(self, base_arg1, base_arg2):
        self.base_arg1 = base_arg1
        self.base_arg2 = base_arg2
    ...

class Derived(Base):
    def __init__(self, base_arg1, base_arg2, derived_arg1, derived_arg2):
        super().__init__(base_arg1, base_arg2)
        ...

    @classmethod
    def from_base(cls, b, da1, da2):
        return cls(b.base_arg1, b.base_arg2, da1, da2)
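Assuming Base stores its constructor arguments as shown, the classmethod is then used like this. Note that it rebuilds the Derived instance from b's attributes rather than sharing b itself:

```python
class Base(object):
    def __init__(self, base_arg1, base_arg2):
        self.base_arg1 = base_arg1
        self.base_arg2 = base_arg2

class Derived(Base):
    def __init__(self, base_arg1, base_arg2, derived_arg1, derived_arg2):
        super().__init__(base_arg1, base_arg2)
        self.derived_arg1 = derived_arg1
        self.derived_arg2 = derived_arg2

    @classmethod
    def from_base(cls, b, da1, da2):
        # re-create the Base state from b's stored arguments
        return cls(b.base_arg1, b.base_arg2, da1, da2)

b = Base(1, 2)
d = Derived.from_base(b, 3, 4)
print(d.base_arg1, d.derived_arg2)  # 1 4
```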
The alternative approach to Alexey's answer (my +1) is to pass the base object as the base_arg1 argument and to check whether that argument actually holds a base object (i.e., whether it is an instance of the base class). The other arguments can be made technically optional (say, defaulting to None) and checked explicitly inside the code.
The difference is that the argument type alone decides which of the two possible ways of creation is used. This is necessary if the creation of the object cannot be captured explicitly in the source code (e.g., some structure contains a mix of argument tuples, some with initial values, some with references to existing objects). Then you would probably need to pass the arguments as keyword arguments:
d = Derived(b, derived_arg1=derived_arg1, derived_arg2=derived_arg2)
Updated: Sharing the internal structures with the initial class is possible with both approaches. However, be aware that if one of the objects modifies the shared data, the usual funny things can happen.
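A minimal sketch of that type-dispatching constructor, reusing the question's names; the isinstance check is what selects between the two creation paths:

```python
class Base(object):
    def __init__(self, base_arg1, base_arg2):
        self.base_arg1 = base_arg1
        self.base_arg2 = base_arg2

class Derived(Base):
    def __init__(self, base_arg1, base_arg2=None,
                 derived_arg1=None, derived_arg2=None):
        if isinstance(base_arg1, Base):
            # first positional argument is an existing Base object
            super().__init__(base_arg1.base_arg1, base_arg1.base_arg2)
        else:
            super().__init__(base_arg1, base_arg2)
        self.derived_arg1 = derived_arg1
        self.derived_arg2 = derived_arg2

b = Base(1, 2)
d = Derived(b, derived_arg1=3, derived_arg2=4)
print(d.base_arg2, d.derived_arg1)  # 2 3
```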
To be clear here, I'll make an answer with code. pepr talks about this solution, but code is always clearer than English. In this case Base should not be subclassed, but it should be a member of Derived:
class Base(object):
    def __init__(self, base_arg1, base_arg2):
        self.base_arg1 = base_arg1
        self.base_arg2 = base_arg2

class Derived(object):
    def __init__(self, base, derived_arg1, derived_arg2):
        self.base = base
        self.derived_arg1 = derived_arg1
        self.derived_arg2 = derived_arg2

    def derived_method1(self):
        return self.base.base_arg1 * self.derived_arg1
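Usage then looks like this; the Base object is shared by reference, not copied (a short self-contained demonstration):

```python
class Base(object):
    def __init__(self, base_arg1, base_arg2):
        self.base_arg1 = base_arg1
        self.base_arg2 = base_arg2

class Derived(object):
    def __init__(self, base, derived_arg1, derived_arg2):
        self.base = base
        self.derived_arg1 = derived_arg1
        self.derived_arg2 = derived_arg2

    def derived_method1(self):
        return self.base.base_arg1 * self.derived_arg1

b = Base(10, 20)            # the large, expensive object, built once
d = Derived(b, 2, 3)
print(d.derived_method1())  # 20
print(d.base is b)          # True: same object, nothing re-created
```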
I am trying to define a number of classes based on an abstract base class. Each of these classes basically defines a cell shape for a visualisation package. The cell is comprised of a number of vertices (points) and each subclass will require a different number of points. Each class can be thought of as a container for a fixed number of point coordinates.
As an example, consider the base class Shape, which is simply a container for a list of coordinates:
class Shape(object):
    """Cell shape base class."""
    def __init__(self, sequence):
        self.points = sequence

    @property
    def points(self):
        return self._points

    @points.setter
    def points(self, sequence):
        # Error checking goes here, e.g. check that `sequence` is a
        # sequence of numeric values.
        self._points = sequence
Ideally I want to be able to define, say, a Square class, where the points.setter method checks that sequence is of length four. Furthermore I would like a user to not be able to instantiate Shape. Is there a way I can define Shape to be an abstract base class? I have tried changing the definition of Shape to the following:
import abc

class Shape(object):
    """Cell shape base class."""
    __metaclass__ = abc.ABCMeta

    def __init__(self, sequence):
        self.points = sequence

    @abc.abstractproperty
    def npoints(self):
        pass

    @property
    def points(self):
        return self._points

    @points.setter
    def points(self, sequence):
        # Error checking goes here...
        if len(sequence) != self.npoints:
            raise TypeError('Some descriptive error message!')
        self._points = sequence
This requires subclasses to define the property npoints. I can then define a class Square as
class Square(Shape):
    @property
    def npoints(self):
        return 4
However, this would be rather tedious to implement for a large number of subclasses (and with more than one property to implement). I was hoping to define a class factory which would create my subclasses for me, something along the lines of:
def Factory(name, npoints):
    return type(name, (Shape,), dict(npoints=npoints))

Triangle = Factory('Triangle', 3)
Square = Factory('Square', 4)
# etc...
Is this class factory function a valid approach to take, or am I clobbering the npoints property? Is it better to replace the call to type with something more verbose like:
def Factory(name, _npoints):
    class cls(Shape):
        @property
        def npoints(self):
            return _npoints
    cls.__name__ = name
    return cls
An alternative approach would be to define a class attribute _NPOINTS and change the npoints property of Shape to

@property
def npoints(self):
    return self._NPOINTS
However, then I lose the benefit of using an abstract base class since:
- I can't see how to define a class attribute using type, and
- I don't know how to define an abstract class attribute.
Does anyone have any thoughts on the best way to implement this abstract base class and class factory function, or even an altogether better design?
Without knowing more about your project, I cannot give specific advice on the general design. I will just provide a few more general hints and thoughts.
Dynamically generated classes are often a sign that you don't need separate classes at all – simply write a single class that incorporates all the functionality. What's the problem with a Shape class that gets its properties at instantiation time? (Of course there are reasons to use dynamically generated classes – the namedtuple() factory function is one example. I couldn't find any specific reasons in your question, however.)
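For instance, a single Shape class that takes the expected number of points at instantiation time covers the Triangle/Square cases without any generated subclasses (a sketch along those lines, not the questioner's exact design):

```python
class Shape(object):
    """Cell shape: a container for a fixed number of point coordinates."""
    def __init__(self, npoints, sequence):
        self.npoints = npoints
        self.points = sequence

    @property
    def points(self):
        return self._points

    @points.setter
    def points(self, sequence):
        # the length check that each subclass would otherwise repeat
        if len(sequence) != self.npoints:
            raise ValueError('expected %d points, got %d'
                             % (self.npoints, len(sequence)))
        self._points = sequence

square = Shape(4, [(0, 0), (0, 1), (1, 1), (1, 0)])
triangle = Shape(3, [(0, 0), (0, 1), (1, 0)])
```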
Instead of using abstract base classes, you can often simply document the intended interface, and then write classes conforming to this interface. Due to the dynamic nature of Python, you don't strictly need a common base class. There are often other advantages to a common base class, however – for example shared functionality.
Only check for application code errors if not doing so leads to strange errors in unrelated places. If, say, your function expects an iterable, simply assume you got an iterable. If the user passed in something else, your code will fail when it tries to iterate the passed-in object anyway, and the error message will usually be enough for the application developer to understand the error.
I want to have compact class based python DSLs in the following form:
class MyClass(Static):
    z = 3

    def _init_(cls, x=0):
        cls._x = x

    def set_x(cls, x):
        cls._x = x

    def print_x_plus_z(cls):
        print cls._x + cls.z

    @property
    def x(cls):
        return cls._x

class MyOtherClass(MyClass):
    z = 6

    def _init_(cls):
        MyClass._init_(cls, x=3)
I don't want to write MyClass() and MyOtherClass() afterwards. Just want to get this working with only class definitions.
MyClass.print_x_plus_z()
c = MyOtherClass
c.z = 5
c.print_x_plus_z()
assert MyOtherClass.z == 5, "instances don't share the same values!"
I used metaclasses and managed to get _init_, print_x and subclassing working properly, but properties don't work.
Could anyone suggest better alternative?
I'm using Python 2.4+
To give a class (as opposed to its instances) a property, you need to have that property object as an attribute of the class's metaclass (so you'll probably need to make a custom metaclass, to avoid inflicting that property upon other classes with the same metaclass). Similarly for special methods such as __init__: if they're on the class, they affect the instances (which you don't want to make); to have them affect the class, you need to have them on the (custom) metaclass.

What are you trying to accomplish by programming everything "one metalevel up", i.e., a never-instantiated class with a custom metaclass rather than normal instances of a normal class? It just seems a slight amount of extra work for no returns;-).
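A sketch of that "one metalevel up" arrangement, using Python 3 metaclass syntax for brevity (under 2.4 you would set __metaclass__ instead); the x0 class attribute is an invented hook for passing the initial value:

```python
class StaticMeta(type):
    # __init__ on the metaclass runs when the class itself is created,
    # playing the role of the question's _init_
    def __init__(cls, name, bases, namespace):
        super(StaticMeta, cls).__init__(name, bases, namespace)
        cls._x = namespace.get('x0', 0)

    # a property on the metaclass works on the class, not on instances
    @property
    def x(cls):
        return cls._x

    def set_x(cls, value):
        cls._x = value

class MyClass(metaclass=StaticMeta):
    z = 3
    x0 = 0

class MyOtherClass(MyClass):
    z = 6
    x0 = 3

print(MyClass.x + MyClass.z)            # 3
print(MyOtherClass.x + MyOtherClass.z)  # 9
```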