Call a Python descriptor's __get__ with additional arguments?

Is it possible to pass additional arguments when getting a descriptor?
For example, I'd like to have:
class Element(object):
    def __init__(self, text='initial'):
        self.text = text

    def __get__(self, instance, owner, extra_text=''):
        print self.text + instance.name + extra_text
And then be able to use it like:
class MyClass(object):
    elem1 = Element()
    elem2 = Element(text='override')

    def __init__(self, name):
        self.name = name

    def print_elems(self):
        self.elem1
        self.elem1(extra_text='extra')
        self.elem2
        self.elem2(extra_text='extra')
MyClass('name').print_elems()
And then get:
initialname
initialnameextra
overridename
overridenameextra
Is there any way to make this work? I've even tried calling elem1.__get__(self, self.__class__, extra_text='extra') and making extra_text a required param, but couldn't figure out any way to actually provide it.
If not with descriptors, is there an alternative way to achieve this syntax?
Thanks!

In order to call __get__() with anything but the automatic arguments, you need to access the instance of the descriptor. To do that...
Step 1: add this as the first two lines of __get__():
if instance is None:
    return self
This is a fairly common addition to the method.
Step 2: When accessing the descriptor, get it from the class instead of the instance:
MyClass.elem1
or
type(my_inst).elem1
Step 3: Call __get__() explicitly, passing in the desired argument.
Ugly and verbose; avoid as long as you can.
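Putting the three steps together, a minimal sketch reusing the question's names (here __get__ returns the string instead of printing it, so the caller can print the result):
class Element(object):
    def __init__(self, text='initial'):
        self.text = text

    def __get__(self, instance, owner, extra_text=''):
        if instance is None:   # Step 1: class access returns the descriptor itself
            return self
        return self.text + instance.name + extra_text

class MyClass(object):
    elem1 = Element()

    def __init__(self, name):
        self.name = name

obj = MyClass('name')
# Steps 2 and 3: fetch the descriptor through the class, then call __get__ by hand
print type(obj).elem1.__get__(obj, type(obj), extra_text='extra')  # initialnameextra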
I've got a book released about descriptors, which can be found at http://www.lulu.com/spotlight/jacobz_20
It can really help you wrap your head around this if you're confused.

Related

Create instances from just a class without anything else [duplicate]

Is there a way to circumvent the constructor __init__ of a class in python?
Example:
class A(object):
    def __init__(self):
        print "FAILURE"

    def Print(self):
        print "YEHAA"
Now I would like to create an instance of A. It could look like this, however this syntax is not correct.
a = A
a.Print()
EDIT:
An even more complex example:
Suppose I have an object C, whose purpose is to store one single parameter and do some computations with it. The parameter, however, is not passed as such but is embedded in a huge parameter file. It could look something like this:
class C(object):
    def __init__(self, ParameterFile):
        self._Parameter = self._ExtractParameterFile(ParameterFile)

    def _ExtractParameterFile(self, ParameterFile):
        # does some complex magic to extract the right parameter
        return the_extracted_parameter
Now I would like to dump and load an instance of that object C. However, when I load this object, I only have the single variable self._Parameter and I cannot call the constructor, because it is expecting the parameter file.
@staticmethod
def Load(file):
    f = open(file, "rb")
    oldObject = pickle.load(f)
    f.close()
    # somehow create newObject without calling __init__
    newObject._Parameter = oldObject._Parameter
    return newObject
In other words, it is not possible to create an instance without passing the parameter file. In my "real" case, however, it is not a parameter file but some huge chunk of data that I certainly do not want to carry around in memory or even store to disk.
And since I want to return an instance of C from the method Load I do somehow have to call the constructor.
OLD EDIT:
A more complex example, which explains why I am asking the question:
class B(object):
    def __init__(self, name, data):
        self._Name = name
        # do something with data, but do NOT save data in a variable

    @staticmethod
    def Load(file, newName):
        f = open(file, "rb")
        s = pickle.load(f)
        f.close()
        newS = B(???)
        newS._Name = newName
        return newS
As you can see, since data is not stored in a class variable I cannot pass it to __init__. Of course I could simply store it, but what if the data is a huge object, which I do not want to carry around in memory all the time or even save it to disc?
You can circumvent __init__ by calling __new__ directly. Then you can create an object of the given type and call an alternative method in place of __init__. This is what pickle does.
However, first I'd like to stress very much that it is something that you shouldn't do and whatever you're trying to achieve, there are better ways to do it, some of which have been mentioned in the other answers. In particular, it's a bad idea to skip calling __init__.
When objects are created, more or less this happens:
a = A.__new__(A, *args, **kwargs)
a.__init__(*args, **kwargs)
You could skip the second step.
Here's why you shouldn't do this: the purpose of __init__ is to initialize the object, fill in all the fields, and ensure that the __init__ methods of the parent classes are also called. Pickle is an exception because it stores all the data associated with the object (including any fields/instance variables that are set on it), so anything that was set by __init__ the previous time is restored by pickle; there's no need to call it again.
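This is easy to see directly; a minimal demonstration:
import pickle

class P(object):
    def __init__(self):
        print "__init__ running"
        self.x = 1

p = P()                            # prints "__init__ running"
q = pickle.loads(pickle.dumps(p))  # prints nothing: pickle restores __dict__ without __init__
print q.x                          # 1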
If you skip __init__ and use an alternative initializer, you'd have a sort of code duplication - there would be two places where the instance variables are filled in, and it's easy to miss one of them in one of the initializers or accidentally make the two initializers fill the fields differently. This gives the possibility of subtle bugs that aren't trivial to trace (you'd have to know which initializer was called), and the code will be more difficult to maintain. Not to mention that you'd be in an even bigger mess if you're using inheritance - the problems would go up the inheritance chain, because you'd have to use this alternative initializer everywhere up the chain.
Also by doing so you'd be more or less overriding Python's instance creation and making your own. Python already does that for you pretty well, no need to go reinventing it and it will confuse people using your code.
Here's what to do instead: use a single __init__ method that is called for all possible instantiations of the class and that initializes all instance variables properly. For different modes of initialization, use either of these two approaches:
Support different signatures for __init__ that handle your cases by using optional arguments.
Create several class methods that serve as alternative constructors. Make sure they all create instances of the class in the normal way (i.e. calling __init__), as shown by Roman Bodnarchuk, while performing additional work or whatever. It's best if they pass all the data to the class (and __init__ handles it), but if that's impossible or inconvenient, you can set some instance variables after the instance was created and __init__ is done initializing.
If __init__ has an optional step (e.g. like processing that data argument, although you'd have to be more specific), you can either make it an optional argument or make a normal method that does the processing... or both.
Use the classmethod decorator for your Load method:
class B(object):
    def __init__(self, name, data):
        self._Name = name
        # store data

    @classmethod
    def Load(cls, file, newName):
        f = open(file, "rb")
        s = pickle.load(f)
        f.close()
        return cls(newName, s)
So you can do:
loaded_obj = B.Load('filename.txt', 'foo')
Edit:
Anyway, if you still want to omit __init__ method, try __new__:
>>> class A(object):
...     def __init__(self):
...         print '__init__'
...
>>> A()
__init__
<__main__.A object at 0x800f1f710>
>>> a = A.__new__(A)
>>> a
<__main__.A object at 0x800f1fd50>
Taking your question literally, I would use metaclasses:
class MetaSkipInit(type):
    def __call__(cls):
        return cls.__new__(cls)

class B(object):
    __metaclass__ = MetaSkipInit

    def __init__(self):
        print "FAILURE"

    def Print(self):
        print "YEHAA"

b = B()
b.Print()
This can be useful e.g. for copying constructors without polluting the parameter list.
But to do this properly would be more work and care than my proposed hack.
Not really. The purpose of __init__ is to initialize an object after it has been created, and by default it really doesn't do anything. If the __init__ method is not doing what you want, and it's not your own code to change, you can choose to switch it out. For example, taking your class A, we could do the following to avoid calling that __init__ method:
def emptyinit(self):
    pass

A.__init__ = emptyinit
a = A()
a.Print()
This dynamically switches out the class's __init__ method, replacing it with an empty one. Note that this is probably NOT a good thing to do, as it does not call the superclass's __init__ method.
You could also subclass it to create your own class that does everything the same, except overriding the __init__ method to do what you want it to (perhaps nothing).
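For example, a minimal sketch using the question's class A (the subclass name is made up here):
class QuietA(A):
    def __init__(self):  # deliberately empty: A.__init__ never runs
        pass

a = QuietA()
a.Print()  # YEHAA, and "FAILURE" is never printed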
Perhaps, however, you simply wish to call the method from the class without instantiating an object. If that is the case, you should look into the @classmethod and @staticmethod decorators. They allow for just that type of behavior.
In your code you have put the @staticmethod decorator, which does not take a self argument. Perhaps what may be better for the purpose would be a @classmethod, which might look more like this:
@classmethod
def Load(cls, file, newName):
    # Get the data
    data = getdata()
    # Create an instance of B with the data
    return cls(newName, data)
UPDATE: Rosh's excellent answer pointed out that you CAN avoid calling __init__ by implementing __new__, which I was actually unaware of (although it makes perfect sense). Thanks Rosh!
I was reading the Python Cookbook and there's a section talking about this; the example given uses __new__ to bypass __init__():
>>> class A:
...     def __init__(self, a):
...         self.a = a
...
>>> test = A('a')
>>> test.a
'a'
>>> test_noinit = A.__new__(A)
>>> test_noinit.a
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    test_noinit.a
AttributeError: 'A' object has no attribute 'a'
>>>
However, I think this only works in Python 3. Below is a run under 2.7:
>>> class A:
...     def __init__(self, a):
...         self.a = a
...
>>> test = A.__new__(A)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    test = A.__new__(A)
AttributeError: class A has no attribute '__new__'
>>>
As I said in my comment, you could change your __init__ method so that it allows creation without giving any values to its parameters:
def __init__(self, p0, p1, p2):
    # some logic
would become:
def __init__(self, p0=None, p1=None, p2=None):
    if p0 and p1 and p2:
        # some logic
or:
def __init__(self, p0=None, p1=None, p2=None, init=True):
    if init:
        # some logic

Safest way to remove argument order and provide default values at the same time in Python

I am trying to write Python 2.7 code that is easier to scale by removing argument order while providing default values, in case requirements change. Here is my code:
# Class:
class Mailer(object):
    def __init__(self, **args):
        self.subject = args.get('subject', None)
        self.mailing_list = args.get('mailing_list', None)
        self.from_address = args.get('from_address', None)
        self.password = args.get('password', None)
        self.sector = args.get('sector', "There is a problem with the HTML")
# call:
mailer = Mailer(
    subject="Subject goes here",
    password="password",
    mailing_list=("email@email.com", "email@email.com", "email@email.com"),
    from_address="email@email.com",
    sector=Sector()
)
I'm still new to the language so if there is a better way to achieve this, I'd really like to know. Thanks in advance.
Try this way of initializing your class:
class Mailer(object):
    def __init__(self, **args):
        for k in args:
            self.__dict__[k] = args[k]
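Usage then looks like this; note that only the attributes you actually pass in will exist:
m = Mailer(subject="Subject goes here", password="password")
print m.subject  # Subject goes here
# m.sector would raise AttributeError here, since this version has no defaults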
The problem with the way you're doing it is that there is no documentation about what arguments the class accepts, so help(Mailer) is useless. What you should do is provide default argument values in the __init__() method where possible.
To set the arguments as attributes on the instance, you can use a little introspection, as in another answer I wrote, to avoid all the boilerplate self.foo = foo stuff.
class Mailer(object):
    def __init__(self, subject="None", mailing_list=(),
                 from_address="noreply@example.com", password="hunter42",
                 sector="There is a problem with the HTML"):
        # set all arguments as attributes of this instance
        code = self.__init__.__func__.func_code
        argnames = code.co_varnames[1:code.co_argcount]
        locs = locals()
        self.__dict__.update((name, locs[name]) for name in argnames)
You can provide the arguments in any order if you make the call using explicit argument names, regardless of how they're defined in the method, so your example call will still work.
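For example, both of these calls (the address is a placeholder) produce identically configured instances:
m1 = Mailer(password="hunter42", subject="Hi",
            mailing_list=("a@example.com",))
m2 = Mailer(subject="Hi", mailing_list=("a@example.com",),
            password="hunter42")
assert m1.subject == m2.subject and m1.password == m2.password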

Using a base class function that takes parameters as a decorator for derived class function

I feel like I have a pretty good grasp on using decorators when dealing with regular functions, but between using methods of base classes for decorators in derived classes, and passing parameters to said decorators, I cannot figure out what to do next.
Here is a snippet of code.
class ValidatedObject:
    ...
    def apply_validation(self, field_name, code):
        def wrap(self, f):
            self._validations.append(Validation(field_name, code, f))
            return f
        return wrap
class test(ValidatedObject):
    ....
    @apply_validation("_name", "oh no!")
    def name_validation(self, name):
        return name == "jacob"
If I try this as is, I get an error that "apply_validation" is not found.
If I try it with @self.apply_validation I get an error that "self" isn't found.
I've also been messing around with making apply_validation a class method without success.
Would someone please explain what I'm doing wrong, and the best way to fix this? Thank you.
The issue you're having is that apply_validation is a method, which means you need to call it on an instance of ValidatedObject. Unfortunately, at the time it is being called (during the definition of the test class), there is no appropriate instance available. You need a different approach.
The most obvious one is to use a metaclass that searches through its instance dictionaries (which are really class dictionaries) and sets up the _validations variable based on what it finds. You can still use a decorator, but it probably should be a global function or perhaps a static method, and it will need to work differently. Here's some code that uses a metaclass and a decorator that adds function attributes:
class ValidatedMeta(type):
    def __new__(meta, name, bases, dct):
        validations = [Validation(f._validation_field_name, f._validation_code, f)
                       for f in dct.values() if hasattr(f, '_validation_field_name')]
        dct["_validations"] = validations
        return super(ValidatedMeta, meta).__new__(meta, name, bases, dct)

def apply_validation(field_name, code):
    def decorator(f):
        f._validation_field_name = field_name
        f._validation_code = code
        return f
    return decorator

class ValidatedObject(metaclass=ValidatedMeta):
    pass
class test(ValidatedObject):
    @apply_validation("_name", "oh no!")
    def name_validation(self, name):
        return name == "jacob"
After this code runs, test._validations will be [Validation("_name", "oh no!", test.name_validation)]. Note that the method that is passed to Validation is unbound, so you'll need to pass it a self argument yourself when you call it (or perhaps drop the self argument and change the decorator created in apply_validation to return staticmethod(f)).
This code may not do what you want if you have validation methods defined at several levels of an inheritance hierarchy. The metaclass as written above only checks the immediate class's dict for methods with the appropriate attributes. If you need it include inherited methods in _validations too, you may need to modify the logic in ValidatedMeta.__new__. Probably the easiest way to go is to look for _validations attributes in the bases and concatenate the lists together.
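That base-class concatenation might look like this (an untested sketch extending the ValidatedMeta above):
class ValidatedMeta(type):
    def __new__(meta, name, bases, dct):
        validations = [Validation(f._validation_field_name, f._validation_code, f)
                       for f in dct.values() if hasattr(f, '_validation_field_name')]
        # also pull in validations already collected on the base classes
        for base in bases:
            validations.extend(getattr(base, '_validations', []))
        dct["_validations"] = validations
        return super(ValidatedMeta, meta).__new__(meta, name, bases, dct)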
Just an example of using a decorator on a class method:
from functools import wraps

def VALIDATE(dec):
    @wraps(dec)
    def _apply_validation(self, name):
        self.validate(name)
        return dec(self, name)
    return _apply_validation
class A:
    def validate(self, name):
        if name != "aamir":
            raise Exception, 'Invalid name "%s"' % name

class B(A):
    @VALIDATE
    def name_validation(self, name):
        return name

b = B()
b.name_validation('jacob')  # should raise exception

do's and don'ts of __init__ method

I was just wondering if it's considered wildly inappropriate, just messy, or merely unconventional to use the __init__ method to set variables by calling, one after another, the rest of the functions within a class. I have done things like self.age = ch_age(), where ch_age is a function within the same class, and set more variables the same way, like self.name = ch_name(), etc. Also, what about prompting for user input within __init__ specifically to get the arguments with which to call ch_age? The latter feels a little wrong, I must say. Any advice, suggestions, or admonishments welcome!
I always favor being lazy: if you NEED to initialize everything in the constructor, you should. In a lot of cases, though, I put a general "reset" method in my class. Then you can call that method in __init__, and you can re-initialize the class instance easily.
But if you don't need those variables initially, I feel it's better to wait to initialize things until you actually need them.
For your specific case
class Blah1(object):
    def __init__(self):
        self.name = self.ch_name()

    def ch_name(self):
        return 'Ozzy'
you might as well use the property decorator. The following will have the same effect:
class Blah2(object):
    def __init__(self):
        pass

    @property
    def name(self):
        return 'Ozzy'
In both of the implementations above, the following code should not issue any exceptions:
>>> b1 = Blah1()
>>> b2 = Blah2()
>>> assert b1.name == 'Ozzy'
>>> assert b2.name == 'Ozzy'
If you wanted to provide a reset method, it might look something like this:
class Blah3(object):
    def __init__(self, name):
        self.reset(name)

    def reset(self, name):
        self.name = name

Python - how do I force the use of a factory method to instantiate an object?

I have a set of related classes that all inherit from one base class. I would like to use a factory method to instantiate objects for these classes. I want to do this because then I can store the objects in a dictionary keyed by the class name before returning the object to the caller. Then if there is a request for an object of a particular class, I can check to see whether one already exists in my dictionary. If not, I'll instantiate it and add it to the dictionary. If so, then I'll return the existing object from the dictionary. This will essentially turn all the classes in my module into singletons.
I want to do this because the base class that they all inherit from does some automatic wrapping of the functions in the subclasses, and I don't want the functions to get wrapped more than once, which is what happens currently if two objects of the same class are created.
The only way I can think of doing this is to check the stacktrace in the __init__() method of the base class, which will always be called, and to throw an exception if the stacktrace does not show that the request to make the object is coming from the factory function.
Is this a good idea?
Edit: Here is the source code for my base class. I've been told that I need to figure out metaclasses to accomplish this more elegantly, but this is what I have for now. All Page objects use the same Selenium Webdriver instance, which is in the driver module imported at the top. This driver is very expensive to initialize; it is initialized the first time a LoginPage is created. After it is initialized, the initialize() method will return the existing driver instead of creating a new one. The idea is that the user must begin by creating a LoginPage. There will eventually be dozens of Page classes defined, and they will be used by unit testing code to verify that the behavior of a website is correct.
from driver import get_driver, urlpath, initialize
from settings import urlpaths

class DriverPageMismatchException(Exception):
    pass
class URLVerifyingPage(object):
    # we add logic in __init__() to check the expected urlpath for the page
    # against the urlpath that the driver is showing - we only want the page's
    # methods to be invokable if the driver is actually at the appropriate page.
    # If the driver shows a different urlpath than the page is supposed to
    # have, the method should throw a DriverPageMismatchException
    def __init__(self):
        self.driver = get_driver()
        self._adjust_methods(self.__class__)

    def _adjust_methods(self, cls):
        for attr, val in cls.__dict__.iteritems():
            if callable(val) and not attr.startswith("_"):
                print "adjusting: " + str(attr) + " - " + str(val)
                setattr(
                    cls,
                    attr,
                    self._add_wrapper_to_confirm_page_matches_driver(val)
                )
        for base in cls.__bases__:
            if base.__name__ == 'URLVerifyingPage':
                break
            self._adjust_methods(base)

    def _add_wrapper_to_confirm_page_matches_driver(self, page_method):
        def _wrapper(self, *args, **kwargs):
            if urlpath() != urlpaths[self.__class__.__name__]:
                raise DriverPageMismatchException(
                    "path is '" + urlpath() +
                    "' but '" + urlpaths[self.__class__.__name__] + "' expected " +
                    "for " + self.__class__.__name__
                )
            return page_method(self, *args, **kwargs)
        return _wrapper
class LoginPage(URLVerifyingPage):
    def __init__(self, username=username, password=password, baseurl="http://example.com/"):
        self.username = username
        self.password = password
        self.driver = initialize(baseurl)
        super(LoginPage, self).__init__()

    def login(self):
        self.driver.find_element_by_id("username").clear()
        self.driver.find_element_by_id("username").send_keys(self.username)
        self.driver.find_element_by_id("password").clear()
        self.driver.find_element_by_id("password").send_keys(self.password)
        self.driver.find_element_by_id("login_button").click()
        return HomePage()
class HomePage(URLVerifyingPage):
    def some_method(self):
        ...
        return SomePage()

    def many_more_methods(self):
        ...
        return ManyMorePages()
It's no big deal if a page gets instantiated a handful of times -- the methods will just get wrapped a handful of times and a handful of unnecessary checks will take place, but everything will still work. But it would be bad if a page was instantiated dozens or hundreds or tens of thousands of times. I could just put a flag in the class definition for each page and check to see if the methods have already been wrapped, but I like the idea of keeping the class definitions pure and clean and shoving all the hocus-pocus into a deep corner of my system where no one can see it and it just works.
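(For reference, the flag idea mentioned above could be as small as this sketch; the _wrapped attribute is hypothetical:)
class URLVerifyingPage(object):
    def __init__(self):
        self.driver = get_driver()
        # wrap each concrete Page class's methods only once
        if '_wrapped' not in type(self).__dict__:
            self._adjust_methods(self.__class__)
            type(self)._wrapped = True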
In Python, it's almost never worth trying to "force" anything. Whatever you come up with, someone can get around it by monkeypatching your class, copying and editing the source, fooling around with bytecode, etc.
So, just write your factory, and document that as the right way to get an instance of your class, and expect anyone who writes code using your classes to understand TOOWTDI, and not violate it unless she really knows what she's doing and is willing to figure out and deal with the consequences.
If you're just trying to prevent accidents, rather than intentional "misuse", that's a different story. In fact, it's just standard design-by-contract: check the invariant. Of course at this point, SillyBaseClass is already screwed up, and it's too late to repair it, and all you can do is assert, raise, log, or whatever else is appropriate. But that's what you want: it's a logic error in the application, and the only thing to do is get the programmer to fix it, so assert is probably exactly what you want.
So:
class SillyBaseClass:
    singletons = {}

class Foo(SillyBaseClass):
    def __init__(self):
        assert self.__class__ not in SillyBaseClass.singletons

def get_foo():
    if Foo not in SillyBaseClass.singletons:
        SillyBaseClass.singletons[Foo] = Foo()
    return SillyBaseClass.singletons[Foo]
If you really do want to stop things from getting this far, you can check the invariant earlier, in the __new__ method, but unless "SillyBaseClass got screwed up" is equivalent to "launch the nukes", why bother?
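That earlier check might look something like this sketch (note the class must be new-style for __new__ to be honored):
class SillyBaseClass(object):
    singletons = {}

    def __new__(cls, *args, **kwargs):
        # fail fast if anyone bypasses the factory
        assert cls not in SillyBaseClass.singletons, "use the factory function"
        return super(SillyBaseClass, cls).__new__(cls)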
It sounds like you want to provide a __new__ implementation. Something like:
class MySingletonBase(object):
    instance_cache = {}

    def __new__(cls, arg1, arg2):
        if cls in MySingletonBase.instance_cache:
            return MySingletonBase.instance_cache[cls]
        self = super(MySingletonBase, cls).__new__(cls)
        MySingletonBase.instance_cache[cls] = self
        return self
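Usage, with one caveat worth knowing:
a = MySingletonBase('x', 1)
b = MySingletonBase('y', 2)  # cached instance returned; the new arguments are ignored
assert a is b
# caveat: if the class defines __init__, Python still re-runs it on the
# cached instance every time the class is called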
Rather than adding complex code to catch mistakes at runtime, I'd first try to use convention to guide users of your module to do the right thing on their own.
Give your classes "private" names (prefixed by an underscore), give them names that suggest they shouldn't be instantiated (e.g. _Internal...), and make your factory function "public".
That is, something like this:
class _InternalSubClassOne(_BaseClass):
    ...

class _InternalSubClassTwo(_BaseClass):
    ...

# An example factory function.
def new_object(arg):
    return _InternalSubClassOne() if arg == 'one' else _InternalSubClassTwo()
I'd also add docstrings or comments to each class, like "Don't instantiate this class by hand, use the factory method new_object."
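Callers then only ever touch the factory:
obj = new_object('one')
print type(obj).__name__  # _InternalSubClassOne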
You can also just nest the classes in the factory method, as described here:
https://python-3-patterns-idioms-test.readthedocs.io/en/latest/Factory.html#preventing-direct-creation
Working example from mentioned source:
# Factory/shapefact1/NestedShapeFactory.py
import random

class Shape(object):
    types = []

def factory(type):
    class Circle(Shape):
        def draw(self): print("Circle.draw")
        def erase(self): print("Circle.erase")

    class Square(Shape):
        def draw(self): print("Square.draw")
        def erase(self): print("Square.erase")

    if type == "Circle": return Circle()
    if type == "Square": return Square()
    assert 0, "Bad shape creation: " + type

def shapeNameGen(n):
    for i in range(n):
        yield factory(random.choice(["Circle", "Square"]))

# Circle() # Not defined
for shape in shapeNameGen(7):
    shape.draw()
    shape.erase()
I'm not a fan of this solution; I just want to add it as one more option.
