So I am trying to create a class with an initialization method that needs to get the type of the object being created in order to properly set the default values of the init arguments.
To give a concrete example with code, say I have the following class:
def __init__(self, foo, bar=type(self).some_class_variable, ham=type(self).some_other_class_variable):
    self.foo = foo
    self.bar = bar
    self.ham = self.some_function(ham)
This is the functionality I am looking for, but my IDE says "self" is not defined, which I can understand, since self does not exist yet when the default values are evaluated. So my question is: how would I go about implementing this properly? I do not want to hardcode the class type where I currently have type(self), because subclasses of this class may have their own values for some_class_variable and some_other_class_variable, and I want those subclasses to use the class variables corresponding to their type.
My solution needs to work in Python 3.6 and 3.7, but I would really prefer a solution that works in all versions of Python from 3.6 onwards if possible.
I think you should set these in the body of __init__, not in the parameter list.
def __init__(self, foo):
    self.foo = foo
    self.bar = type(self).some_class_variable
    self.ham = self.some_function(type(self).some_other_class_variable)
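Because the lookup happens inside __init__ at instance-creation time, a subclass that overrides the class variables automatically gets its own values. A quick illustrative sketch (the concrete values and the some_function body here are made up):

class Base:
    some_class_variable = 1
    some_other_class_variable = 2

    def some_function(self, value):
        return value * 10

    def __init__(self, foo):
        self.foo = foo
        self.bar = type(self).some_class_variable
        self.ham = self.some_function(type(self).some_other_class_variable)

class Child(Base):
    some_class_variable = 100  # subclass override is picked up via type(self)

print(Base('x').bar)   # 1
print(Child('x').bar)  # 100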
EDIT:
If the values are defaults, you can do this:
default_value = 'default pls'

def __init__(self, foo, bar=default_value, ham=default_value):
    self.foo = foo
    if bar == default_value:
        self.bar = type(self).some_class_variable
    else:
        self.bar = bar
    if ham == default_value:
        self.ham = self.some_function(type(self).some_other_class_variable)
    else:
        self.ham = self.some_function(ham)
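A variation on the same idea uses a unique sentinel object instead of a string, so a caller can never accidentally pass the default value; this is only a sketch along the same lines as the code above:

_SENTINEL = object()  # unique marker; no caller-supplied value can be this exact object

def __init__(self, foo, bar=_SENTINEL, ham=_SENTINEL):
    self.foo = foo
    self.bar = type(self).some_class_variable if bar is _SENTINEL else bar
    self.ham = self.some_function(
        type(self).some_other_class_variable if ham is _SENTINEL else ham
    )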
The class name is not bound yet, as the class has not been fully defined at that point; default argument values are evaluated before the class itself exists. See this answer, which explains this in more depth.
A way to work around this, albeit not a great one, is to create setter methods for the attributes and call them after an instance has been initialised, like so:
class Example:
    def __init__(self, foo):
        self.foo = foo
        self.bar = None
        self.ham = None

    def set_bar(self, bar):
        self.bar = bar

    def set_ham(self, ham):
        self.ham = ham
You can go one step further and validate the type of those attributes with a simple if statement, or by adding hints from Python's typing module.
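For example, a setter could reject values of the wrong type before assigning them; this sketch assumes bar is meant to be an int, which is purely illustrative:

def set_bar(self, bar):
    # Runtime type check before assignment (the expected type here is an assumption)
    if not isinstance(bar, int):
        raise TypeError(f"bar must be an int, got {type(bar).__name__}")
    self.bar = bar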
I was hoping someone could give me guidance on some of the approaches I've tried for mocking. I'm really trying to understand what the best method is for this (I think fairly general) case, and whether there are better approaches or whether my current ones just need some tweaking.
This is my current setup. I have three classes that are instantiated in the class being tested. These classes have methods that I need in order to build the function I'm testing.
class Foo:
    def get_foo(self):  # Method to be used in a Generator method
        ...

class Bar:
    def get_bar(self):  # Method to be used in a Generator method
        ...

class Baz:
    def get_baz(self):  # Method to be used in a Generator method
        ...

# Class being tested
class Generator:
    def __init__(self):
        self.foo = Foo()
        self.bar = Bar()
        self.baz = Baz()
    ...rest of Generator...
I am trying to mock the classes instantiated in Generator.__init__ before I create an instance of Generator. I want to do this because classes like Foo and Bar query MongoDB to fetch some data. I don't want the tests to actually spin up a Mongo instance, so I'm trying to mock those classes out before that happens.
The approaches I am listing below have been successful in my case. But like I said before, I'm really trying to understand what the best approach is.
Approach 1
I believe this approach meets the goal correctly. It mocks everything before Generator is instantiated, but it seems inefficient to keep chaining return_value.get_value..... I have not been able to find a simplified version of this that actually works.
class TestGenerator(unittest.TestCase):
    # Mock arguments are filled in bottom-up: the decorator closest to the
    # function supplies the first mock parameter after cls.
    @classmethod
    @patch('generatorModule.Baz')
    @patch('generatorModule.Bar')
    @patch('generatorModule.Foo')
    def setUpClass(cls, patchFoo, patchBar, patchBaz):
        cls.foo = patchFoo.return_value
        cls.foo.get_foo.return_value.get_value.return_value = 'Some string I want to return'
        cls.bar = patchBar.return_value
        cls.bar.get_bar.return_value.get_value.return_value = 'Another string I want to return'
        cls.baz = patchBaz.return_value
        cls.baz.get_baz.return_value.get_value.return_value = 'Another another string I want to return'
        cls.generator = Generator()
    ...rest of test...
Approach 2
I found this one through some research into mocking. It essentially makes the __init__ method of Generator return None, and then I can mock the attributes I want using MagicMock. This also works, but it feels like a cop-out to set everything up from scratch for each test. I understand how it works, but I don't know if it is a valid solution, and I'm not sure what problems may occur if I use this approach for all of my tests going forward.
class TestGenerator(unittest.TestCase):
    @classmethod
    @patch.object(Generator, "__init__", Mock(return_value=None))
    def setUpClass(cls):
        mockFoo = MagicMock()
        mockFoo.get_foo.return_value = 'Some string I want to return'
        mockBar = MagicMock()
        mockBar.get_bar.return_value = 'Another string I want to return'
        mockBaz = MagicMock()
        mockBaz.get_baz.return_value = 'Another another string I want to return'
        cls.generator = Generator()
        cls.generator.foo = mockFoo
        cls.generator.bar = mockBar
        cls.generator.baz = mockBaz
    ...rest of test...
Any notes on my current approaches, or what might be the best approach would be amazing. The version of Python that I am using is 3.9.6. Thank you for any help you can give!
Have you thought about using dependency injection here?
Instead of instantiating Foo, Bar and Baz inside Generator.__init__(), you can pass instances of those classes in as arguments.
class Generator:
    def __init__(self, foo: Foo, bar: Bar, baz: Baz):
        self.foo = foo
        self.bar = bar
        self.baz = baz
    ...rest of Generator...

generator = Generator(Foo(), Bar(), Baz())
Then, in your test you could initialize the class using your mocks instead of the original classes.
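For example, a test along these lines might look like this (a sketch assuming unittest and unittest.mock; the module name generatorModule is taken from the patch targets in the question):

import unittest
from unittest.mock import MagicMock

from generatorModule import Generator

class TestGenerator(unittest.TestCase):
    def setUp(self):
        # Plain MagicMocks stand in for Foo, Bar and Baz, so nothing touches MongoDB
        self.foo = MagicMock()
        self.foo.get_foo.return_value = 'Some string I want to return'
        self.bar = MagicMock()
        self.bar.get_bar.return_value = 'Another string I want to return'
        self.baz = MagicMock()
        self.baz.get_baz.return_value = 'Another another string I want to return'
        self.generator = Generator(self.foo, self.bar, self.baz)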
If you don’t want to pass these instances every time you initialize Generator, you could either give the arguments a default value:
from typing import Optional

class Generator:
    def __init__(self, foo: Optional[Foo] = None, bar: Optional[Bar] = None, baz: Optional[Baz] = None):
        self.foo = foo or Foo()
        self.bar = bar or Bar()
        self.baz = baz or Baz()
    ...rest of Generator...

generator = Generator()
Or, the method I prefer, create a classmethod:
class Generator:
    def __init__(self, foo: Foo, bar: Bar, baz: Baz):
        self.foo = foo
        self.bar = bar
        self.baz = baz

    @classmethod
    def auto_initialized(cls):
        return cls(Foo(), Bar(), Baz())
    ...rest of Generator...

generator = Generator.auto_initialized()
So I'm refactoring my code to be more Pythonic; specifically, I've learned that explicit getters and setters should be replaced with @property. My case is that I have an Example class with an initialized bar attribute (the initial value lets me know whether the user has set bar):
class Example:
    def __init__(self):
        self.bar = 'initializedValue'

    @property
    def bar(self):
        return self._bar

    @bar.setter
    def bar(self, b):
        self._bar = b

    def doIfBarWasSet(self):
        if self.bar != 'initializedValue':
            pass
        else:
            pass
After running foo = Example(), my debugger shows that foo has two attributes, _bar and bar, both set to 'initializedValue'. Also, when I run foo.bar = 'changedValue' or foo._bar = 'changedValue', both of them change to 'changedValue'. Why are there two attributes? Isn't that redundant? I think I understand why there is a _bar attribute, since I added it in @bar.setter, but why is bar there as a string attribute? Shouldn't bar rather be a method leading to the bar @property?
It's fine. Keep in mind that bar is not an instance attribute, but a class attribute. Since it has type property, it implements the descriptor protocol so that its behavior is different when accessed from an instance. If e is an instance of Example, then e.bar does not give you the instance of property assigned to Example.bar; it gives you the result of Example.bar.__get__(e, Example) (which in this case, happens to be Example.bar.fget(e), where fget is the original function decorated by #property).
In short, every instance has its own _bar attribute, but access to that attribute is mediated by the class attribute Example.bar.
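You can see this directly (a short sketch against the Example class above):

e = Example()
print('_bar' in vars(e))              # True: the instance stores _bar
print('bar' in vars(e))               # False: bar is not in the instance dict
print(type(Example.__dict__['bar']))  # <class 'property'>: bar lives on the class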
It's easier to see that bar is a class attribute if you write this minimal (and sufficient, since neither the getter nor setter in this case requires a def statement) definition.
class Example:
    def __init__(self):
        self.bar = "initializedValue"

    bar = property(lambda self: self._bar, lambda self, b: setattr(self, '_bar', b))
or more generally
def bar_getter(self):
    return self._bar

def bar_setter(self, b):
    self._bar = b

class Example:
    def __init__(self):
        self.bar = "initializedValue"

    bar = property(bar_getter, bar_setter)
Is something like this possible?
class Foo:
    BAR = Foo("bar")

    def __init__(self, name):
        self.name = name
Currently this yields NameError: name 'Foo' is not defined.
No. Postponed evaluation of annotations (from __future__ import annotations) only applies to variable and function annotations, so it does not help here. Until the class statement has been completely executed, there is no class Foo to instantiate. You must wait until after Foo is defined to create an instance of it.
class Foo:
    def __init__(self, name):
        self.name = name

Foo.BAR = Foo("bar")
You can always initialize BAR = None, then change the value of the attribute after the class is defined.
class Foo:
    BAR = None  # To be a Foo instance once Foo is defined
    ...

Foo.BAR = Foo("bar")  # Fulfilling our earlier promise
That might be desirable for documentation purposes, to make it clearer in the definition that Foo.BAR will exist, though with a different value. I can't think of a situation where that would be necessary, though.
Consider the following code:
class Foo():
    pass

Foo.entries = dict()

a = Foo()
a.entries['1'] = 1

b = Foo()
b.entries['3'] = 3

print(a.entries)
This will print:
{'1': 1, '3': 3}
because entries is added as a class-level (static) attribute shared by all instances. Is there a way to monkey patch the class definition in order to add new attributes (without using inheritance)?
I managed to find the following way but it looks convoluted to me:
def patch_me(target, field, value):
    def func(self):
        if not hasattr(self, '__' + field):
            setattr(self, '__' + field, value())
        return getattr(self, '__' + field)
    setattr(target, field, property(func))

patch_me(Foo, 'entries', dict)
Ordinarily, attributes are added either by the __init__() function or after instantiating:
foo = Foo()
foo.bar = 'something' # note case
If you want to do this automatically, inheritance is by far the simplest way to do so:
class Baz(Foo):
    def __init__(self):
        super().__init__()  # super() needs arguments in 2.x
        self.bar = 'something'
Note that classes don't need to appear at the top level of a Python module. You can declare a class inside a function:
def make_baz(value):
    class Baz(Foo):
        def __init__(self):
            super().__init__()  # super() needs arguments in 2.x
            self.bar = value()
    return Baz()
This example will create a new class every time make_baz() is called. That may or may not be what you want. It would probably be simpler to just do this:
def make_foo(value):
    result = Foo()
    result.bar = value()
    return result
If you're really set on monkey-patching the original class, the example code you provided is more or less the simplest way of doing it. You might consider using decorator syntax for property(), but that's a minor change. I should also note that it will not invoke double-underscore name mangling, which is probably a good thing because it means you cannot conflict with any names used by the class itself.
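For what it's worth, the decorator-syntax version of the same patch might look like this (a sketch, behaviourally equivalent to the patch_me above):

def patch_me(target, field, value):
    @property
    def prop(self):
        # Lazily create per-instance storage the first time the attribute is read
        if not hasattr(self, '__' + field):
            setattr(self, '__' + field, value())
        return getattr(self, '__' + field)
    setattr(target, field, prop)

patch_me(Foo, 'entries', dict)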
I have a class Foo which contains a datamember of type Bar. I can't make a generalized, "default" Bar.__init__() - the Bar object is passed into the Foo.__init__() method.
How do I tell Python that I want a datamember of this type?
class Foo:
    # These are the other things I've tried, with their errors:
    # myBar            # NameError: name 'myBar' is not defined
    # Bar myBar        # Java style: this is invalid Python syntax.
    # myBar = None     # Assign "None", assign the real value in __init__. Doesn't work.
    #####
    myBar = Bar(0, 0, 0)  # Pass in "default" values.

    def __init__(self, theBar):
        self.myBar = theBar

    def getBar(self):
        return self.myBar
This works, when I pass in the "default" values as shown. However, when I call getBar, I do not get back the one I passed in in the Foo.__init__() function - I get the "default" values.
b = Bar(1,2,3)
f = Foo(b)
print f.getBar().a, f.getBar().b, f.getBar().c
This spits out 0 0 0, not 1 2 3, like I'm expecting.
If I don't bother declaring the myBar variable, I get errors in the getBar(self): method (Foo instance has no attribute 'myBar').
What's the correct way to use a custom datamember in my object?
You don't need to tell Python you are going to add a certain data member – just add it. Python is more dynamic than e.g. Java in this regard.
If Bar instances are essentially immutable (meaning they are not changed in practice), you can give the default instance as the default value of the __init__() parameter:
class Foo:
    def __init__(self, the_bar=Bar(0, 0, 0)):
        self.my_bar = the_bar
All Foo instances using the default value will share a single Bar instance. If the Bar instance might be changed, this is probably not what you want, and you should use this idiom instead:
class Foo:
    def __init__(self, the_bar=None):
        if the_bar is None:
            the_bar = Bar(0, 0, 0)
        self.my_bar = the_bar
Note that you shouldn't usually write getters and setters in Python. They are just unnecessary boilerplate code slowing down your application. Since Python supports properties, you also don't need them to be future-proof.
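For instance, if validation is needed later, a plain attribute can be turned into a property without changing any calling code; this is only a sketch of what that might look like:

class Foo:
    def __init__(self, the_bar=None):
        if the_bar is None:
            the_bar = Bar(0, 0, 0)
        self.my_bar = the_bar  # this assignment now goes through the setter below

    @property
    def my_bar(self):
        return self._my_bar

    @my_bar.setter
    def my_bar(self, value):
        # validation could be added here later; callers still just write foo.my_bar
        self._my_bar = value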
The correct way is to do nothing other than assign it in the constructor.
class Foo:
    def __init__(self, bar):
        self.bar = bar

    def getbar(self):
        return self.bar
You definitely don't have to declare bar ahead of time.
It sounds like you want Foo.bar to default to a value if one isn't specified so you might do something like this:
class Foo:
    def __init__(self, bar=None):
        # one of many ways to construct a new
        # default Bar if one isn't provided:
        self._bar = bar if bar else Bar(...)

    @property
    def bar(self):
        """
        This isn't necessary, but you can provide proper getters and setters
        if you prefer.
        """
        return self._bar

    @bar.setter
    def bar(self, newbar):
        """
        Example of defining a setter.
        """
        self._bar = newbar
Typically, just naming the variable appropriately and omitting the setter is considered more 'Pythonic'.
class Foo:
    def __init__(self, bar=None):
        self.bar = bar if bar else Bar(...)
You don't declare variables in Python, and variables are untyped.
Just do:
class Foo(object):
    def __init__(self, bar):
        self.bar = bar

    def getbar(self):
        return self.bar
I suspect that the issue is caused by you using old-style classes, which are kind of odd. If you inherit from object, you get a new-style class, which is designed to be much less surprising.