Calling super() with arguments set in sub-class __init__()?

So I have a class (let's call it ParamClass) which requires a parameter for initialization, and that parameter is something that should be available to the user to configure via some option-setting interface.
ParamClass knows nothing about the configuration interface or how to read configuration values. So I made another class, called Configurator, which does all of that. When a class inherits from Configurator and tells it what configuration keys to read, Configurator's __init__() method will read those keys and assign their values to the correct attributes in self.
The problem I run into, however, is that when I try to pass arguments to super(), including the parameters to be read by Configurator, those parameters have no value yet: they are evaluated in the argument list before any of the base __init__() methods have run. Example shown below. MyClass.__init__() can't even get started, because self.server_param doesn't exist yet.
class ParamClass:
    """Just some class that needs a parameter for init"""
    def __init__(self, param1, **kwargs) -> None:
        super().__init__(**kwargs)
        self.value = param1

class Configurator:
    """Reads parameters from a configuration source and sets appropriate class
    variables.
    """
    def __init__(self, **kwargs) -> None:
        super().__init__(**kwargs)
        self.server_param = 2

class MyClass(Configurator, ParamClass):
    def __init__(self, **kwargs) -> None:
        super().__init__(param1=self.server_param, **kwargs)
        # <-- Gives AttributeError: 'MyClass' object has no attribute 'server_param'

MyClass()
The only way I can get this to work is to break the MRO in Configurator.__init__() and force the order of initialization myself. This is bad for obvious reasons: I plan to use Configurator throughout my code and can't break the MRO with it.
class ParamClass:
    """Just some class that needs a parameter for init"""
    def __init__(self, param1, **kwargs) -> None:
        super().__init__(**kwargs)
        self.value = param1

class Configurator:
    """Reads parameters from a configuration source and sets appropriate class
    variables.
    """
    def __init__(self, **kwargs) -> None:
        # super().__init__(**kwargs)
        self.server_param = 2

class MyClass(Configurator, ParamClass):
    def __init__(self, **kwargs) -> None:
        Configurator.__init__(self, **kwargs)
        # <-- After this call, self.server_param is defined.
        ParamClass.__init__(self, param1=self.server_param, **kwargs)

MyClass()
How do I accomplish configuration of parameters while using super()? How do I do this in a generalized way that doesn't require Configurator to know the little details of ParamClass?
Note: In my particular case, I don't "own" the ParamClass() code. It is library code that I'm using.
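One way to keep the MRO intact is sketched below: instead of setting attributes for MyClass.__init__ to read back, Configurator injects the values it reads into the keyword arguments it forwards, so they reach ParamClass through the normal cooperative chain. The config_keys hook, the _read_config helper, and the assumption that configuration keys share names with constructor parameters are all illustrative choices, not part of the original code:
class ParamClass:
    """Library class that needs a parameter at init time."""
    def __init__(self, param1, **kwargs) -> None:
        super().__init__(**kwargs)
        self.value = param1

class Configurator:
    """Reads parameters and forwards them as keyword arguments."""
    # Names of the configuration keys to read; subclasses override this.
    config_keys = ()

    def __init__(self, **kwargs) -> None:
        for key in self.config_keys:
            value = self._read_config(key)
            setattr(self, key, value)
            kwargs.setdefault(key, value)  # forward to later classes in the MRO
        super().__init__(**kwargs)

    def _read_config(self, key):
        return 2  # stand-in for the real configuration lookup

class MyClass(Configurator, ParamClass):
    config_keys = ("param1",)

obj = MyClass()
assert obj.value == 2
With this shape, MyClass never needs to mention param1 at all, and a caller can still override the configured value by passing param1 explicitly.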

Is it possible to use a parent class method as a decorator for a child class method?

I have two classes, Manager and DataManager, simplified in the example below:
import numpy as np

class Manager:
    def __init__(self, value, delay_init=True):
        self.value = value
        self.is_init = False
        self.data = None
        if not delay_init:
            self._initialize()

    @staticmethod
    def delayed_init(fn):
        def wrapped_delayed_init(obj, *args, **kwargs):
            if not obj.is_init:
                obj.data = np.random.randn(obj.value, obj.value)
                obj.is_init = True
            return fn(obj, *args, **kwargs)
        return wrapped_delayed_init

    @delayed_init.__get__(object)
    def _initialize(self):
        pass

class DataManager(Manager):
    def __init__(self, value):
        super().__init__(value)

    @Manager.delayed_init
    def calculate_mean(self):
        return np.mean(self.data)

data_manager = DataManager(100)
assert data_manager.data is None
mean = data_manager.calculate_mean()
What my code needs to do is pass the method calculate_mean as an argument to some other function as part of a test suite. In order to do this I need to create an instance of DataManager. What I must avoid is the time incurred by full instance creation (since it involves downloading data), so I delegate this task to a function in the parent class called delayed_init. A subset of the methods belonging to DataManager require delayed_init to have been run, so I decorate them with delayed_init to ensure it is run whenever 1) another method requires it and 2) it has not already been run.
Now my problem: currently it appears I need to explicitly write the decorator as @Manager.delayed_init, but I would prefer to be able to write it as @<parent>.delayed_init. In my opinion it is cleaner not to have to write out a given type explicitly when the type is always the parent. However, I cannot find a way to properly reference the parent class before an instance/object is created. Is it possible to access the parent class without the creation of any instances?
Thank you for the assistance.
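For what it's worth, one way to sidestep the parent-reference problem entirely (a sketch, not from the original post): since delayed_init only touches instance attributes, it does not need to live on Manager at all. As a plain module-level decorator it can be applied in any subclass without naming a class, because it only sees the instance when the decorated method is finally called:
import numpy as np

def delayed_init(fn):
    # Module-level decorator: no class reference needed at decoration time.
    def wrapped_delayed_init(obj, *args, **kwargs):
        if not obj.is_init:
            obj.data = np.random.randn(obj.value, obj.value)
            obj.is_init = True
        return fn(obj, *args, **kwargs)
    return wrapped_delayed_init

class Manager:
    def __init__(self, value, delay_init=True):
        self.value = value
        self.is_init = False
        self.data = None

class DataManager(Manager):
    def __init__(self, value):
        super().__init__(value)

    @delayed_init  # no explicit parent reference needed
    def calculate_mean(self):
        return np.mean(self.data)

data_manager = DataManager(100)
assert data_manager.data is None
mean = data_manager.calculate_mean()
The trade-off is that delayed_init is no longer namespaced under Manager; if that grouping matters, it can still be assigned as an attribute of Manager afterwards.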

Wrap object in custom Python and then add extra logic

A Python library provides a function create_object that creates an object of type OriginalClass.
I would like to create my own class so that it takes the output of create_object and adds extra logic (on top of what create_object already does). Also, that new custom object should have all the properties of the base object.
So far I've attempted the following:
class MyClass(OriginalClass):
    def __init__(self, *args, **kwargs):
        super(MyClass, self).__init__(*args, **kwargs)
This does not accomplish what I have in mind, since the function create_object is never called and so the extra logic it performs is never executed.
Also, I do not want to attach the output of create_object to an attribute of MyClass, as in self.myobject = create_object(), since I want its members to be accessible directly on an instance of MyClass.
What would be the best way to achieve that functionality in Python? Does that correspond to an existing design pattern?
I am new to Python OOP so maybe the description provided is too vague. Please feel free to request in depth description from those vaguely described parts.
Try this:
class MyClass(OriginalClass):
    def __init__(self, custom_arg, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.init(custom_arg)

    def init(self, custom_arg):
        # add subclass initialization logic here
        self._custom_arg = custom_arg

    def my_class_method(self):
        pass

obj = create_object()
obj.__class__ = MyClass
obj.init(custom_arg)
obj.original_class_method()
obj.my_class_method()
You can change the __class__ attribute of an object if you know what you're doing.
If I were you, I would consider using the Adapter design pattern instead. It may take longer to code, but it's easier to maintain and understand.
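For illustration, a minimal sketch of that adapter/wrapper idea (the class name MyClassAdapter is made up here; create_object and original_class_method come from the question's premise): keep the created object as a private member and delegate unknown attribute lookups to it.
class MyClassAdapter:
    """Wraps the object returned by create_object and adds extra logic,
    delegating everything else to the wrapped instance."""
    def __init__(self, wrapped):
        self._wrapped = wrapped

    def __getattr__(self, name):
        # __getattr__ is called only when normal lookup fails, so any
        # OriginalClass attribute or method is forwarded transparently.
        return getattr(self._wrapped, name)

    def my_class_method(self):
        pass  # extra logic goes here

obj = MyClassAdapter(create_object())
obj.original_class_method()  # forwarded to the wrapped object
obj.my_class_method()        # added behavior
Unlike reassigning __class__, this never mutates the library object, at the cost that isinstance(obj, OriginalClass) is no longer true.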
Looking at the original code, I would have implemented the create_object functions as class methods.
class SqueezeNet(nn.Module):
    ...

    @classmethod
    def squeezenet1_0(cls, pretrained: bool = False, progress: bool = True, **kwargs: Any) -> SqueezeNet:
        return cls._squeezenet('1_0', pretrained, progress, **kwargs)

    @classmethod
    def squeezenet1_1(cls, pretrained: bool = False, progress: bool = True, **kwargs: Any) -> SqueezeNet:
        return cls._squeezenet('1_1', pretrained, progress, **kwargs)

    @classmethod
    def _squeezenet(cls, version: str, pretrained: bool, progress: bool, **kwargs: Any) -> SqueezeNet:
        model = cls(version, **kwargs)
        if pretrained:
            arch = 'squeezenet' + version
            state_dict = load_state_dict_from_url(model_urls[arch],
                                                  progress=progress)
            model.load_state_dict(state_dict)
        return model
So what does the class method do? It just instantiates the object as normal, but then calls a particular method on it before returning it. As such, there's nothing to do in your subclass. Calling MySqueezeNetSubclass._squeezenet would instantiate your subclass, not SqueezeNet. If you need to customize anything else, you can override _squeezenet in your own class, using super()._squeezenet to do the parent creation first before modifying the result.
class MySubclass(SqueezeNet):
    @classmethod
    def _squeezenet(cls, *args, **kwargs):
        model = super()._squeezenet(*args, **kwargs)
        # Do MySubclass-specific things to model here
        return model
But, _squeezenet isn't a class method; it's a regular function. There's not much you can do except patch it at runtime, which is hopefully something you can do before anything tries to call it. For example,
import torchvision.models.squeezenet

def _new_squeezenet(version, pretrained, progress, **kwargs):
    model = MySqueezeNetSubclass(version, **kwargs)
    # Maybe more changes specific to your code here. Specifically,
    # you might want to provide your own URL rather than one from
    # model_urls, etc.
    if pretrained:
        arch = 'squeezenet' + version
        state_dict = load_state_dict_from_url(model_urls[arch],
                                              progress=progress)
        model.load_state_dict(state_dict)
    return model

torchvision.models.squeezenet._squeezenet = _new_squeezenet
The lesson here is that not everything is designed to be easily subclassed.

How to handle parameters with polymorphism in python?

I want to use polymorphism: I have a Movil class as my parent class, and my specific class is Car:
class Movil:
    def __init__(self, name):
        self._name = name

class Car(Movil):
    def __init__(self, code):
        super().__init__()
        self.code = code
Since every Movil takes a name, and every Car takes a code and is a Movil, I expect to be able to pass both:
class Main(object):
    def main(self):
        a = Car('toyota', '001')

if __name__ == "__main__":
    Main().main()
But I am getting this error:
Traceback (most recent call last):
File "main", line 1, in <module>
File "main", line 3, in main
TypeError: __init__() takes 2 positional arguments but 3 were given
What is wrong with this code?
TLDR: Method parameters are not "inherited" when a child class overrides a parent method. The child class method must explicitly take and forward the parameter:
class Car(Movil):
    def __init__(self, name, code):
        super().__init__(name)
        self.code = code
Inheritance only integrates attributes and methods of the base class into the child class. Notably, if the child class redefines an attribute or method, this hides ("shadows") the parent attribute/method completely.
For example, if Car would not define its own __init__ then Movil.__init__ would be used. Related and derived features – such as "the parameters of __init__" – are not themselves inherited: they only show up because they belong to the inherited attribute/method.
Since Car does define its own __init__, this shadows Movil.__init__ including its related features, such as the parameters.
In order for Car to take the original name parameter, it must be explicitly re-defined on Car.__init__:
class Car(Movil):
    #                  v take `name` parameter of Movil.__init__
    def __init__(self, name, code):
        #                v pass on `name` parameter to Movil.__init__
        super().__init__(name)
        self.code = code
As an alternative, variadic positional (*args) or keyword (**kwargs) parameters may be used to forward all unknown arguments:
class Car(Movil):
    #                        v collect unknown arguments
    def __init__(self, code, *args, **kwargs):
        #                v pass on unknown arguments to Movil.__init__
        super().__init__(*args, **kwargs)
        self.code = code

a = Car("001", "toyota")
b = Car(name="toyota", code="001")
Be mindful that variadic parameters make it difficult or impossible to replicate some patterns that positional-or-keyword parameters allow. For example, in the variadic version above it is not possible to accept code as either a keyword or a trailing positional argument, as the explicit definition allows.
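To make that trade-off concrete, a small check (assuming the variadic Car definition just above):
# With the variadic definition, code must come first positionally:
c = Car("001", "toyota")
assert c.code == "001" and c._name == "toyota"

# Swapping the order is still accepted, but silently wrong:
d = Car("toyota", "001")
assert d.code == "toyota" and d._name == "001"

# With the explicit definition, Car("toyota", "001") would instead bind
# name="toyota" and code="001", as intended.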

How to execute BaseClass method before it gets overridden by DerivedClass method in Python

I am almost sure that there is a proper term for what I want to do, but since I'm not familiar with it, I will try to describe the whole idea explicitly. What I have is a collection of classes that all inherit from one base class. The classes consist almost entirely of methods that are relevant within each class only. However, several methods share a similar name, general functionality and some logic, even though their implementations are still mostly different. So what I want to know is whether it's possible to create a method in the base class that executes some logic common to all of these methods, but then continues execution in the class-specific method. Hopefully that makes sense, but I will try to give a basic example of what I want.
So consider a base class that looks something like that:
class App(object):
    def __init__(self, testName):
        self.localLog = logging.getLogger(testName)

    def access(self):
        LOGIC_SHARED
And an example of a derived class:
class App1(App):
    def __init__(self, testName):
        . . .
        super(App1, self).__init__(testName)

    def access(self):
        LOGIC_SPECIFIC
So what I'd like to achieve is that the LOGIC_SHARED part in the base class's access method is executed when calling the access method of any App subclass, before executing the LOGIC_SPECIFIC part, which is (as the name says) specific to the access method of each derived class.
If that makes any difference, the LOGIC_SHARED mostly consists of logging and maintenance tasks.
Hope that is clear enough and the idea makes sense.
NOTE 1:
There are class specific parameters which are being used in the LOGIC_SHARED section.
NOTE 2:
It is important to implement that behavior using only Python built-in functions and modules.
NOTE 3:
The LOGIC_SHARED part looks something like this:
try:
    self.localLog.info("Checking the actual link for %s", self.application)
    self.link = self.checkLink(self.application)
    self.localLog.info("Actual link found!: %s", self.link)
except:
    self.localLog.info("No links found. Going to use the default link: %s", self.link)
So, there are plenty of specific class instance attributes that I use and I'm not sure how to use these attributes from the base class.
Sure, just put the specific logic in its own "private" function, which can be overridden by the derived classes, and leave access in the base class.
class Base(object):
    def access(self):
        # Shared logic 1
        self._specific_logic()
        # Shared logic 2

    def _specific_logic(self):
        # Nothing special to do in the base class
        pass
        # Or you could even raise an exception instead:
        # raise Exception('Called access on Base class instance')

class DerivedA(Base):
    # overrides Base implementation
    def _specific_logic(self):
        # DerivedA specific logic
        pass

class DerivedB(Base):
    # overrides Base implementation
    def _specific_logic(self):
        # DerivedB specific logic
        pass

def test():
    x = Base()
    x.access()  # Shared logic 1
                # Shared logic 2

    a = DerivedA()
    a.access()  # Shared logic 1
                # DerivedA specific logic
                # Shared logic 2

    b = DerivedB()
    b.access()  # Shared logic 1
                # DerivedB specific logic
                # Shared logic 2
The easiest way to do what you want is to simply call the parent class's access method inside the child's access method.
class App(object):
    def __init__(self, testName):
        self.localLog = logging.getLogger(testName)

    def access(self):
        LOGIC_SHARED

class App1(App):
    def __init__(self, testName):
        super(App1, self).__init__(testName)

    def access(self):
        App.access(self)
        # or equivalently, use super:
        # super(App1, self).access()
However, your shared functionality is mostly logging and maintenance. Unless there is a pressing reason to put it inside the parent class, you may want to consider refactoring the shared functionality into a decorator function. This is particularly useful if you want to reuse similar logging and maintenance functionality for a range of methods inside your class.
You can read more about function decorators here: http://www.artima.com/weblogs/viewpost.jsp?thread=240808, or here on Stack Overflow: How to make a chain of function decorators?.
def logged(method):
    def decorated_method(self, *args, **kwargs):
        LOGIC_SHARED
        return method(self, *args, **kwargs)
    return decorated_method
Remember that in Python, functions are first-class objects. That means you can take a function and pass it as a parameter to another function. A decorator function makes use of this: it takes another function as a parameter (here called method) and then creates a new function (here called decorated_method) that takes the place of the original function.
Your App1 class then would look like this:
class App1(App):
    @logged
    def access(self):
        LOGIC_SPECIFIC
This really is shorthand for this:
class App1(App):
    def access(self):
        LOGIC_SPECIFIC
    access = logged(access)
I would find this more elegant than adding methods to the superclass to capture shared functionality.
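For completeness, a minimal runnable sketch of the decorator approach; the logger setup and the log messages here are illustrative stand-ins for LOGIC_SHARED and LOGIC_SPECIFIC, not part of the original code:
import logging

logging.basicConfig(level=logging.INFO)

def logged(method):
    def decorated_method(self, *args, **kwargs):
        # stand-in for LOGIC_SHARED
        self.localLog.info("shared logging/maintenance before %s", method.__name__)
        return method(self, *args, **kwargs)
    return decorated_method

class App(object):
    def __init__(self, testName):
        self.localLog = logging.getLogger(testName)

class App1(App):
    @logged
    def access(self):
        # stand-in for LOGIC_SPECIFIC
        self.localLog.info("App1-specific access logic")

App1("demo").access()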
If I understand this comment (How to execute BaseClass method before it gets overridden by DerivedClass method in Python) correctly, you want additional arguments passed through the parent class's method to be used in the derived class.
Based on Jonathon Reinhart's answer, here is how you could do it:
class Base(object):
    def access(self,
               param1, param2,  # first, common parameters
               *args,           # second, positional arguments
               **kwargs         # third, keyword arguments
               ):
        # Shared logic 1
        self._specific_logic(param1, param2, *args, **kwargs)
        # Shared logic 2

    def _specific_logic(self, param1, param2, *args, **kwargs):
        # Nothing special to do in the base class
        pass
        # Or you could even raise an exception instead:
        # raise Exception('Called access on Base class instance')

class DerivedA(Base):
    # overrides Base implementation
    def _specific_logic(self, param1, param2, param3):
        # DerivedA specific logic
        pass

class DerivedB(Base):
    # overrides Base implementation
    def _specific_logic(self, param1, param2, param4):
        # DerivedB specific logic
        pass

def test():
    x = Base()

    a = DerivedA()
    a.access("param1", "param2", "param3")  # Shared logic 1
                                            # DerivedA specific logic
                                            # Shared logic 2

    b = DerivedB()
    b.access("param1", "param2", param4="param4")  # Shared logic 1
                                                   # DerivedB specific logic
                                                   # Shared logic 2
I personally prefer Jonathon Reinhart's answer, but seeing as you seem to want more options, here's two more. I would probably never use the metaclass one, as cool as it is, but I might consider the second one with decorators.
With Metaclasses
This method uses a metaclass for the base class that will force the base class's access method to be called first, without having a separate private function, and without having to explicitly call super or anything like that. End result: no extra work/code goes into inheriting classes.
Plus, it works like maaaagiiiiic </spongebob>
Below is the code that will do this. (Note that this is Python 2 code: it relies on the __metaclass__ class attribute and dict.iteritems, both of which Python 3 dropped.) Here http://dbgr.cc/W you can step through the code live and see how it works:
#!/usr/bin/env python

class ForceBaseClassFirst(type):
    def __new__(cls, name, bases, attrs):
        print("Creating class '%s'" % name)

        def wrap_function(fn_name, base_fn, other_fn):
            def new_fn(*args, **kwargs):
                print("calling base '%s' function" % fn_name)
                base_fn(*args, **kwargs)
                print("calling other '%s' function" % fn_name)
                other_fn(*args, **kwargs)
            new_fn.__name__ = "wrapped_%s" % fn_name
            return new_fn

        if name != "BaseClass":
            print("setting attrs['access'] to wrapped function")
            attrs["access"] = wrap_function(
                "access",
                getattr(bases[0], "access", lambda: None),
                attrs.setdefault("access", lambda: None)
            )
        return type.__new__(cls, name, bases, attrs)

class BaseClass(object):
    __metaclass__ = ForceBaseClassFirst

    def access(self):
        print("in BaseClass access function")

class OtherClass(BaseClass):
    def access(self):
        print("in OtherClass access function")

print("OtherClass attributes:")
for k, v in OtherClass.__dict__.iteritems():
    print("%15s: %r" % (k, v))

o = OtherClass()
print("Calling access on OtherClass instance")
print("-------------------------------------")
o.access()
This uses a metaclass to replace OtherClass's access function with a function that wraps a call to BaseClass's access function and a call to OtherClass's access function. See the best explanation of metaclasses here https://stackoverflow.com/a/6581949.
Stepping through the code should really help you understand the order of things.
With Decorators
This functionality could also easily be put into a decorator, as shown below. Again, a steppable/debuggable/runnable version of the code below can be found here http://dbgr.cc/0
#!/usr/bin/env python

def superfy(some_func):
    def wrapped(self, *args, **kwargs):
        # NOTE might need to be changed when dealing with
        # multiple inheritance
        base_fn = getattr(self.__class__.__bases__[0], some_func.__name__,
                          lambda *args, **kwargs: None)
        # bind the parent class' function and call it
        base_fn.__get__(self, self.__class__)(*args, **kwargs)
        # call the child class' function
        some_func(self, *args, **kwargs)
    wrapped.__name__ = "superfy(%s)" % some_func.__name__
    return wrapped

class BaseClass(object):
    def access(self):
        print("in BaseClass access function")

class OtherClass(BaseClass):
    @superfy
    def access(self):
        print("in OtherClass access function")

print("OtherClass attributes")
print("----------------------")
for k, v in OtherClass.__dict__.iteritems():
    print("%15s: %r" % (k, v))
print("")

o = OtherClass()
print("Calling access on OtherClass instance")
print("-------------------------------------")
o.access()
The decorator above retrieves the BaseClass' function of the same name, and calls that first before calling the OtherClass' function.
Maybe this simple approach can help.
import logging

class App:
    def __init__(self, testName):
        self.localLog = logging.getLogger(testName)
        self.application = None
        self.link = None

    def access(self):
        print('There is something BaseClass must do')
        print('The application is ', self.application)
        print('The link is ', self.link)

class App1(App):
    def __init__(self, testName):
        # ...
        super(App1, self).__init__(testName)

    def access(self):
        self.application = 'Application created by App1'
        self.link = 'Link created by App1'
        super(App1, self).access()
        print('There is something App1 must do')

class App2(App):
    def __init__(self, testName):
        # ...
        super(App2, self).__init__(testName)

    def access(self):
        self.application = 'Application created by App2'
        self.link = 'Link created by App2'
        super(App2, self).access()
        print('There is something App2 must do')
and the test result:
>>>
>>> app = App('Baseclass')
>>> app.access()
There is something BaseClass must do
The application is None
The link is None
>>> app1 = App1('App1 test')
>>> app1.access()
There is something BaseClass must do
The application is Application created by App1
The link is Link created by App1
There is something App1 must do
>>> app2 = App2('App2 text')
>>> app2.access()
There is something BaseClass must do
The application is Application created by App2
The link is Link created by App2
There is something App2 must do
>>>
By adding a combine function we can combine two functions and execute them one after the other, as below:
def combine(*fun):
    def new(*s):
        for i in fun:
            i(*s)
    return new

class base():
    def x(self, i):
        print('i', i)

class derived(base):
    def x(self, i):
        print('i*i', i * i)
    x = combine(base.x, x)

new_obj = derived()
new_obj.x(3)
Output below:
i 3
i*i 9
The hierarchy need not be single-level; it can have any number of levels or be nested.

Injecting function call after __init__ with decorator

I'm trying to find the best way to create a class decorator that does the following:
Injects a few functions into the decorated class
Forces a call to one of these functions AFTER the decorated class' __init__ is called
Currently, I'm just saving off a reference to the 'original' __init__ method and replacing it with my __init__ that calls the original and my additional function. It looks similar to this:
orig_init = cls.__init__

def new_init(self, *args, **kwargs):
    """
    'Extend' wrapped class' __init__ so we can attach to all signals
    automatically
    """
    orig_init(self, *args, **kwargs)
    self._debugSignals()

cls.__init__ = new_init
Is there a better way to 'augment' the original __init__ or inject my call somewhere else? All I really need is for my self._debugSignals() to be called sometime after the object is created. I also want it to happen automatically, which is why I thought after __init__ was a good place.
Extra misc. decorator notes
It might be worth mentioning some background on this decorator. You can find the full code here. The point of the decorator is to automatically attach to any PyQt signals and print when they are emitted. The decorator works fine when I decorate my own subclasses of QtCore.QObject, however I've been recently trying to automatically decorate all QObject children.
I'd like to have a 'debug' mode in the application where I can automatically print ALL signals just to make sure things are doing what I expect. I'm sure this will result in TONS of debug, but I'd still like to see what's happening.
The problem is my current version of the decorator is causing a segfault when replacing QtCore.QObject.__init__. I've tried to debug this, but the code is all SIP generated, which I don't have much experience with.
So, I was wondering if there was a safer, more pythonic way to inject a function call AFTER the __init__ and hopefully avoid the segfault.
Based on this post and this answer, an alternative way to do this is through a custom metaclass. This would work as follows (tested in Python 2.7):
# define a new metaclass which overrides the "__call__" function
class NewInitCaller(type):
    def __call__(cls, *args, **kwargs):
        """Called when you call MyNewClass()"""
        obj = type.__call__(cls, *args, **kwargs)
        obj.new_init()
        return obj

# then create a new class with the __metaclass__ set as our custom metaclass
class MyNewClass(object):
    __metaclass__ = NewInitCaller

    def __init__(self):
        print "Init class"

    def new_init(self):
        print "New init!!"

# when you create an instance
a = MyNewClass()
>>> Init class
>>> New init!!
The basic idea is that:
when you call MyNewClass(), Python looks up the metaclass and finds that you have defined NewInitCaller;
the metaclass's __call__ function is called;
this function creates the MyNewClass instance using type;
the instance runs its own __init__ (printing "Init class");
the metaclass then calls the new_init function of the instance.
Here is the solution for Python 3.x, based on this post's accepted answer. Also see PEP 3115 for reference, I think the rationale is an interesting read.
Changes in the example above are shown with comments; the only real change is the way the metaclass is defined, all other are trivial 2to3 modifications.
# define a new metaclass which overrides the "__call__" function
class NewInitCaller(type):
    def __call__(cls, *args, **kwargs):
        """Called when you call MyNewClass()"""
        obj = type.__call__(cls, *args, **kwargs)
        obj.new_init()
        return obj

# then create a new class with the metaclass passed as an argument
class MyNewClass(object, metaclass=NewInitCaller):  # added argument
    # the `__metaclass__ = NewInitCaller` line is removed; it would have no effect

    def __init__(self):
        print("Init class")  # function, not statement

    def new_init(self):
        print("New init!!")  # function, not statement

# when you create an instance
a = MyNewClass()
>>> Init class
>>> New init!!
Here's a generalized form of jake77's example which implements __post_init__ on a non-dataclass. This enables a subclass's configure() to be automatically invoked in correct sequence after the base & subclass __init__s have completed.
# define a new metaclass which overrides the "__call__" function
class PostInitCaller(type):
    def __call__(cls, *args, **kwargs):
        """Called when you call BaseClass()"""
        print(f"{__class__.__name__}.__call__({args}, {kwargs})")
        obj = type.__call__(cls, *args, **kwargs)
        obj.__post_init__(*args, **kwargs)
        return obj

# then create a new class with the metaclass passed as an argument
class BaseClass(object, metaclass=PostInitCaller):
    def __init__(self, *args, **kwargs):
        print(f"{__class__.__name__}.__init__({args}, {kwargs})")
        super().__init__()

    def __post_init__(self, *args, **kwargs):
        print(f"{__class__.__name__}.__post_init__({args}, {kwargs})")
        self.configure(*args, **kwargs)

    def configure(self, *args, **kwargs):
        print(f"{__class__.__name__}.configure({args}, {kwargs})")

class SubClass(BaseClass):
    def __init__(self, *args, **kwargs):
        print(f"{__class__.__name__}.__init__({args}, {kwargs})")
        super().__init__(*args, **kwargs)

    def configure(self, *args, **kwargs):
        print(f"{__class__.__name__}.configure({args}, {kwargs})")
        super().configure(*args, **kwargs)

# when you create an instance
a = SubClass('a', b='b')
running gives:
PostInitCaller.__call__(('a',), {'b': 'b'})
SubClass.__init__(('a',), {'b': 'b'})
BaseClass.__init__(('a',), {'b': 'b'})
BaseClass.__post_init__(('a',), {'b': 'b'})
SubClass.configure(('a',), {'b': 'b'})
BaseClass.configure(('a',), {'b': 'b'})
I know that the metaclass approach is the Pro way, but I have a more readable and easy proposal using @staticmethod:
class Invites(TimestampModel, db.Model):
    id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    invitee_email = db.Column(db.String(128), nullable=False)

    def __init__(self, invitee_email):
        self.invitee_email = invitee_email

    @staticmethod
    def create_invitation(invitee_email):
        """
        Create an invitation,
        save it and fetch it back, because the id
        is being generated in the DB
        """
        invitation = Invites(invitee_email)
        db.session.add(invitation)
        db.session.commit()
        return Invites.query.filter(
            Invites.invitee_email == invitee_email
        ).one_or_none()
So I could use it this way:
invitation = Invites.create_invitation("jim@mail.com")
print(invitation.id, invitation.invitee_email)
>>> 1 jim@mail.com
