Edit: this is unfortunately not answered in "What is the difference between __init__ and __call__ in Python?"
import requests

class OAuth2Bearer(requests.auth.AuthBase):
    def __init__(self, api_key, access_token):
        self._api_key = api_key
        self._access_token = access_token

    def __call__(self, r):
        r.headers['Api-Key'] = self._api_key
        r.headers['Authorization'] = "Bearer {}".format(self._access_token)
        return r
#############
class AllegroAuthHandler(object):
    def apply_auth(self):
        return OAuth2Bearer(self._api_key, self.access_token)  # what will happen here?
I read about __init__ and __call__, but I still don't understand what is going on in this code.
I don't understand:
1.) Which method will be called, __init__ or __call__?
2.) If __init__, then __init__ doesn't return anything
3.) If __call__, then __call__ can't be called with two parameters
I think __init__ should be called, because we have X(), not x(), as in the example below from this answer:
x = X() # __init__ (constructor)
x() # __call__
I believe this is what you're looking for.
The behaviour of calling an object in Python is governed by its type's __call__, so this:
OAuth2Bearer(args)
is actually this:
type(OAuth2Bearer).__call__(OAuth2Bearer, args)
So what is the type of OAuth2Bearer, also called its "metaclass"? By default it is type; if not, it must be a subclass of type (Python strictly enforces this). From the link above:
If we ignore error checking for a minute, then for regular class instantiation this is roughly equivalent to:
def __call__(obj_type, *args, **kwargs):
    obj = obj_type.__new__(obj_type, *args, **kwargs)
    if obj is not None and isinstance(obj, obj_type):
        obj.__init__(*args, **kwargs)
    return obj
So the result of the call is whatever object.__new__ returns, after it has been passed through object.__init__. object.__new__ basically just allocates space for a new object, and AFAIK is the only way of doing so. To invoke OAuth2Bearer.__call__, you would have to call the instance:
OAuth2Bearer(init_args)(call_args)
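To make the distinction concrete, here is a minimal runnable sketch of the same pattern; FakeRequest is a made-up stand-in for the prepared request object that requests would normally pass in:

class OAuth2Bearer(object):
    def __init__(self, api_key, access_token):  # runs on OAuth2Bearer(...)
        self._api_key = api_key
        self._access_token = access_token

    def __call__(self, r):                       # runs on instance(...)
        r.headers['Api-Key'] = self._api_key
        r.headers['Authorization'] = "Bearer {}".format(self._access_token)
        return r

class FakeRequest(object):
    def __init__(self):
        self.headers = {}

auth = OAuth2Bearer('my-key', 'my-token')  # class call -> __init__ runs
r = auth(FakeRequest())                    # instance call -> __call__ runs
print(r.headers['Authorization'])          # Bearer my-token

In the original code, apply_auth only performs the first step: it returns the OAuth2Bearer instance, and requests itself later calls that instance with each prepared request, which is when __call__ runs.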
I'd say it's neither here.
The part of code that's causing confusion is
OAuth2Bearer(self._api_key, self.access_token)
You need to know one thing: While OAuth2Bearer is the name of a class, it's also an object of class type (a built-in class). So when you write the above line, what's actually called is
type.__call__()
This can be easily verified if you try this code:
print(repr(OAuth2Bearer.__call__))
It will print something like this:
<method-wrapper '__call__' of type object at 0x12345678>
What type.__call__ does and returns is well covered in other questions: it calls OAuth2Bearer.__new__() to create an object, then initializes that object with obj.__init__(), and returns it.
You can think of OAuth2Bearer(self._api_key, self.access_token) like this (pseudo-code for illustration purposes):

OAuth2Bearer(self._api_key, self.access_token):
    obj = OAuth2Bearer.__new__(OAuth2Bearer, self._api_key, self.access_token)
    obj.__init__(self._api_key, self.access_token)
    return obj
__init__() is called when you call the class itself; __call__() is called when you call an instance of the class.
Related
The Python code below fails for some reason.
class NetVend:
    def blankCallback(data):
        pass

    def sendCommand(command, callback=NetVend.blankCallback):
        return NetVend.sendSignedCommand(command, NetVend.signCommand(command), callback)

    def sendSignedCommand(command, signature, callback):
        pass
I get the following error:
Traceback (most recent call last):
  File "module.py", line 1, in <module>
    class NetVend:
  File "module.py", line 5, in NetVend
    def sendCommand(command, callback=NetVend.blankCallback):
NameError: name 'NetVend' is not defined
You cannot refer to a class name while still defining it.
The class body is executed as a local namespace; you can refer to functions and attributes as local names instead.
Moreover, default values to function keyword parameters are bound at definition time, not when the method is called. Use None as a sentinel instead.
Instead of:
def sendCommand(command, callback=NetVend.blankCallback):
    return NetVend.sendSignedCommand(command, NetVend.signCommand(command), callback)
use:
def sendCommand(command, callback=None):
    if callback is None:
        callback = NetVend.blankCallback
    return NetVend.sendSignedCommand(command, NetVend.signCommand(command), callback)
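The definition-time binding is easy to demonstrate in isolation. This small sketch (unrelated to NetVend, names are illustrative) shows that a default value is evaluated once, when the def statement executes, not on each call:

import time

def stamp(t=time.time()):  # the default is evaluated once, right here
    return t

print(stamp())  # some timestamp
time.sleep(1)
print(stamp())  # the very same timestamp: the default was bound at definition time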
You probably wanted to use the class as a factory for instances instead of as a namespace for what are essentially functions. Even if you only used one instance (a singleton) there are benefits in actually creating an instance first.
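As a rough sketch of that instance-based suggestion (an assumption about the intended design, not the original code; signCommand is stubbed here since its real body isn't shown in the question):

class NetVend(object):
    def blankCallback(self, data):
        pass

    def signCommand(self, command):
        # stub: the real signing logic isn't shown in the question
        return 'signed(%s)' % command

    def sendCommand(self, command, callback=None):
        if callback is None:
            callback = self.blankCallback
        return self.sendSignedCommand(command, self.signCommand(command), callback)

    def sendSignedCommand(self, command, signature, callback):
        callback(signature)

vend = NetVend()
vend.sendCommand('balance')  # self.blankCallback is resolved at call time, so no NameError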
Well, I wouldn't say the first, but the second option is certainly true :-)
The trouble is that the default argument is evaluated when the def statement is executed, but at that point NetVend does not exist in that scope, because (obviously) the class itself has not yet been fully defined.
The way round it is to set the default to None, and check within the method:
def sendCommand(command, callback=None):
    if callback is None:
        callback = NetVend.blankCallback
I'm trying to understand the arguments that are passed to a pyramid view function.
The following example demonstrates a function wrapped with two different wrappers. The only difference between the two wrappers is the signature. In the first wrapper, the first positional argument (obj) is explicit. In the second, it is included in *args.
import functools
from pyramid.config import Configurator
import webtest
def decorator_1(func):
    @functools.wraps(func)
    def wrapper(obj, *args, **kwargs):  # <- obj
        print('decorator_1')
        print(type(obj), obj)
        print(args)
        print(kwargs)
        return func(obj, *args, **kwargs)  # <- obj
    wrapper.__wrapped__ = func
    return wrapper

def decorator_2(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print('decorator_2')
        print(args)
        print(kwargs)
        return func(*args, **kwargs)
    wrapper.__wrapped__ = func
    return wrapper

@decorator_1
def func_1(request):
    return {'func': 'func_1'}

@decorator_2
def func_2(request):
    return {'func': 'func_2'}
I would expect both wrapped functions to behave the same.
In decorator_1, I expect obj to be a request object and indeed it is.
In decorator_2, I would expect args[0] to be the same request object but it is not. It appears an additional first positional argument is passed before the request object.
def add_route(config, route, view, renderer="json"):
    """Helper for adding a new route-view pair."""
    route_name = view.__name__
    config.add_route(route_name, route)
    config.add_view(view, route_name=route_name, renderer=renderer)
config = Configurator()
add_route(config, "/func_1", func_1)
add_route(config, "/func_2", func_2)
app = config.make_wsgi_app()
testapp = webtest.TestApp(app)
testapp.get("/func_1")
testapp.get("/func_2")
Output:
decorator_1
<class 'pyramid.request.Request'> GET /func_1 HTTP/1.0
Host: localhost:80
()
{}
decorator_2
(<pyramid.traversal.DefaultRootFactory object at 0x7f981da2ee48>, <Request at 0x7f981da2ea20 GET http://localhost/func_2>)
{}
Consequently, func_2 crashes because it receives a DefaultRootFactory object it does not expect.
I'd like to understand this discrepancy. How come the signature of the wrapper changes what pyramid passes to the wrapped function?
There is a mechanism at stake I don't understand, and I suspect it might be in Pyramid's logic.
I shared my findings in the webargs issue where this came up, but just in case anyone comes across this here:
Pyramid lets you write a view function with either of these signatures
def view(request):
    ...

def view(context, request):
    ...
The second calling convention is the original one, and the first is newer. So even though the (context, request) form is called an "alternate" in the Pyramid docs, it is the default.
Pyramid uses inspect.getfullargspec to see if the view takes a single positional parameter, and if so, wraps it to match the second convention. If the view doesn't match the first convention, it is assumed to already match the second one (which is false in this case).
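You can reproduce the effect of that check outside Pyramid. This is only a sketch of the idea, not Pyramid's actual code:

import inspect

def looks_request_only(view):
    # Rough approximation: exactly one named positional parameter counts
    # as "request-only", regardless of *args/**kwargs; anything else is
    # assumed to follow the (context, request) convention.
    spec = inspect.getfullargspec(view)
    return len(spec.args) == 1

def wrapper_1(obj, *args, **kwargs): pass   # like decorator_1's wrapper
def wrapper_2(*args, **kwargs): pass        # like decorator_2's wrapper

print(looks_request_only(wrapper_1))  # True  -> called as view(request)
print(looks_request_only(wrapper_2))  # False -> called as view(context, request)

That is why decorator_1's wrapper (one named parameter, obj) receives the request directly, while decorator_2's bare *args wrapper is handed (context, request).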
I'm trying to use super in a subclass which is wrapped in another class using a class decorator:
def class_decorator(cls):
    class WrapperClass(object):
        def make_instance(self):
            return cls()
    return WrapperClass
class MyClass(object):
    def say(self, x):
        print(x)

@class_decorator
class MySubclass(MyClass):
    def say(self, x):
        super(MySubclass, self).say(x.upper())
However, the call to super fails:
>>> MySubclass().make_instance().say('hello')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 4, in say
TypeError: super(type, obj): obj must be an instance or subtype of type
The problem is that, when say is called, MySubclass doesn't refer to the original class anymore, but to the return value of the decorator.
One possible solution would be to store the value of MySubclass before decorating it:
class MySubclass(MyClass):
    def say(self, x):
        super(_MySubclass, self).say(x.upper())

_MySubclass = MySubclass
MySubclass = class_decorator(MySubclass)
This works, but isn't intuitive and would need to be repeated for each decorated subclass. I'm looking for a way that doesn't need additional boilerplate for each decorated subclass -- adding more code in one place (say, the decorator) would be OK.
Update: In Python 3 this isn't a problem, since you can use __class__ (or the super variant without arguments), so the following works:
@class_decorator
class MySubclass(MyClass):
    def say(self, x):
        super().say(x.upper())
Unfortunately, I'm stuck with Python 2.7 for this project.
The problem is that your decorator returns a different class than Python (or anyone who uses your code) expects. super not working is just one of the many unfortunate consequences:
>>> isinstance(MySubclass().make_instance(), MySubclass)
False
>>> issubclass(MySubclass, MyClass)
False
>>> pickle.dumps(MySubclass().make_instance())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
_pickle.PicklingError: Can't pickle <class '__main__.MySubclass'>: it's not the same object as __main__.MySubclass
This is why a class decorator should modify the class instead of returning a different one. The correct implementation would look like this:
def class_decorator(wrapped_cls):
    @classmethod
    def make_instance(cls):
        return cls()
    wrapped_cls.make_instance = make_instance
    return wrapped_cls
Now super and everything else will work as expected:
>>> MySubclass().make_instance().say('hello')
HELLO
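For completeness, the checks that failed earlier now come out the way you'd expect (a quick interactive check, assuming the same MyClass/MySubclass definitions as above):

>>> isinstance(MySubclass().make_instance(), MySubclass)
True
>>> issubclass(MySubclass, MyClass)
True
>>> import pickle
>>> pickle.loads(pickle.dumps(MySubclass().make_instance())).say('hello')
HELLO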
The problem occurs because at the time MySubclass.say() is called, the global name MySubclass no longer refers to what's defined in your code as class MySubclass. It refers to the WrapperClass returned by the decorator, which isn't in any way related to the original MySubclass.
If you are using Python 3, you can get around this by NOT passing any arguments to super, like this:
super().say(x.upper())
I don't really know why you use this specific construct, but it does look strange that a subclass of MyClass that defines say() in its source code ends up as something that doesn't have that method at all, which is the case in your code.
Note you could change the class WrapperClass line to make it read
class WrapperClass(cls):
This will make your wrapper a subclass of the one you just decorated. It doesn't help with your super(MySubclass, self) call (you still need to remove the args, which is only possible on Python 3), but at least an instance created as x = MySubclass() would have a say method, as one would expect at first glance.
EDIT: I've come up with a way around this, but it really looks odd and has the disadvantage of making the 'wrapped' class know that it is being wrapped (and it becomes reliant on that, making it unusable if you remove the decorator):
def class_decorator(cls):
    class WrapperClass(object):
        def make_instance(self):
            i = cls()
            i._wrapped = cls
            return i
    return WrapperClass
class MyClass(object):
    def say(self, x):
        print(x)

@class_decorator
class MySubclass(MyClass):
    def say(self, x):
        super(self._wrapped, self).say(x.upper())

# make_instance returns an instance of the original, non-decorated class
i = MySubclass().make_instance()
i.say('hello')
In essence, _wrapped saves a class reference as it was at declaration time, consistent with using the regular super(this_class_name, self) builtin call.
I have the following code:
A decorator:
from functools import wraps
from time import time

def pyDecorator(func):
    print func
    @wraps(func)
    def wrapped(*args, **kwargs):
        print args
        print kwargs
        tBegin = time()
        result = func(*args, **kwargs)
        tEnd = time()
        if result:
            # UI update
            print("\nTBegin '{}' ({} s)".format(func.__name__, tBegin))
            # UI and report update
            print("TEnd '{}' ({} s) ({} s) Result: {}".format(func.__name__, tEnd, tEnd - tBegin, result))
        return result
    # workaround to keep a reference to the original function
    wrapped._original = func
    return wrapped
And a decorated class method:
class Dummy(object):
    @pyDecorator
    def ClassMethod(self):
        print "Original class code executed"
        return True
If I call the original function in the following way, I receive the error "TypeError: ClassMethod() takes exactly 1 argument (0 given)":
ClassInstance.ClassMethod._original()
So I am forced to use the following call:
ClassInstance.ClassMethod._original(ClassInstance)
Is it possible to do this the first way? I do not understand why I should pass the class instance as a parameter when it is already provided.
ClassInstance.ClassMethod._original is a function, not bound to any class instance.
Note that the transformation from function to method happens when a function object is accessed via a class instance, e.g. through dotted attribute access. Here, however, _original is merely an attribute of another function object, the wrapper (which is what gets elevated to a bound method at runtime); it is not itself accessed through a class instance. An implicit self parameter is therefore not passed; you'll have to pass it explicitly.
ClassInstance.ClassMethod._original
     ^             ^          ^
     |             |          |- plain function attached to the wrapper
     |             |- bound method (the wrapper)
     |- instance
I do not understand why I should put the class instance as a parameter
when it is already provided
No, it's not already provided.
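If you want the binding to happen without repeating the instance in the argument list, one option (just a sketch, using the same descriptor protocol that normal attribute access relies on) is:

ClassInstance = Dummy()

# __get__ performs the same function-to-bound-method step that attribute
# access on an instance normally does, so the instance is filled in as
# self automatically.
bound = ClassInstance.ClassMethod._original.__get__(ClassInstance, Dummy)
bound()  # prints "Original class code executed" and returns True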
I'm trying to combine DBUS' asynchronous method calls with Twisted's Deferreds, but I'm encountering trouble in tweaking the usual DBUS service method decorator to do this.
To use the DBUS async callbacks approach, you'd do:
class Service(dbus.service.Object):
    @dbus.service.method(INTERFACE, async_callbacks=('callback', 'errback'))
    def Resources(self, callback, errback):
        callback({'Magic': 42})
There are a few places where I simply wrap those two callbacks in a Deferred, so I thought I'd create a decorator to do that for me:
def twisted_dbus(*args, **kargs):
    def decorator(real_func):
        @dbus.service.method(*args, async_callbacks=('callback', 'errback'), **kargs)
        def wrapped_func(callback, errback, *inner_args, **inner_kargs):
            d = defer.Deferred()
            d.addCallbacks(callback, errback)
            return real_func(d, *inner_args, **inner_kargs)
        return wrapped_func
    return decorator

class Service(dbus.service.Object):
    @twisted_dbus(INTERFACE)
    def Resources(self, deferred):
        deferred.callback({'Magic': 42})
This, however, doesn't work, since the decorated function is a method whose first argument is the instance, and the wrapper's signature doesn't account for that, resulting in this traceback:
$ python service.py
Traceback (most recent call last):
  File "service.py", line 25, in <module>
    class StatusCache(dbus.service.Object):
  File "service.py", line 32, in StatusCache
    @twisted_dbus(INTERFACE)
  File "service.py", line 15, in decorator
    @dbus.service.method(*args, async_callbacks=('callback', 'errback'), **kargs)
  File "/usr/lib/pymodules/python2.6/dbus/decorators.py", line 165, in decorator
    args.remove(async_callbacks[0])
ValueError: list.remove(x): x not in list
I could add an extra argument to the inner function there, like so:
def twisted_dbus(*args, **kargs):
    def decorator(real_func):
        @dbus.service.method(*args, async_callbacks=('callback', 'errback'), **kargs)
        def wrapped_func(possibly_self, callback, errback, *inner_args, **inner_kargs):
            d = defer.Deferred()
            d.addCallbacks(callback, errback)
            return real_func(possibly_self, d, *inner_args, **inner_kargs)
        return wrapped_func
    return decorator
But that seems... well, dumb. Especially if, for some reason, I want to export a non-bound method.
So is it possible to make this decorator work?
Why is it dumb? You're already assuming you know that the first positional argument (after self) is a Deferred. Why is it any more dumb to assume that the real first positional argument is self?
If you also want to support free functions, then write another decorator and use that when you know there is no self argument coming.
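Concretely, that suggestion amounts to keeping two variants under distinct names, something like the sketch below. The names are made up for illustration, and I haven't verified how dbus.service.method behaves on free functions outside a dbus.service.Object subclass:

def twisted_dbus_method(*args, **kargs):
    # For methods: forward the instance (self) to the real function.
    def decorator(real_func):
        @dbus.service.method(*args, async_callbacks=('callback', 'errback'), **kargs)
        def wrapped_func(self, callback, errback, *inner_args, **inner_kargs):
            d = defer.Deferred()
            d.addCallbacks(callback, errback)
            return real_func(self, d, *inner_args, **inner_kargs)
        return wrapped_func
    return decorator

def twisted_dbus_function(*args, **kargs):
    # For free functions: no self argument to forward.
    def decorator(real_func):
        @dbus.service.method(*args, async_callbacks=('callback', 'errback'), **kargs)
        def wrapped_func(callback, errback, *inner_args, **inner_kargs):
            d = defer.Deferred()
            d.addCallbacks(callback, errback)
            return real_func(d, *inner_args, **inner_kargs)
        return wrapped_func
    return decorator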