I have a class Client with many methods:
class Client:
    def compute(self, arg):
        ...  # code
    # more methods
All the methods of this class run synchronously. I want to run them asynchronously. There are many ways to accomplish this, but I'm thinking along these lines:
AsyncClient = make_async(Client)  # make all methods of Client async, etc.!
client = AsyncClient()            # create an instance of AsyncClient
client.async_compute(arg)         # compute asynchronously
client.compute(arg)               # the synchronous method should still exist!
Alright, that looks rather ambitious, but I feel it can be done.
So far I've written this:
def make_async(cls):
    class async_cls(cls):  # derive from the given class
        def __getattr__(self, attr):
            for i in dir(cls):
                if ("async_" + i) == attr:
                    # THE PROBLEM IS HERE
                    # how to get the method with name <i>?
                    return cls.__getattr__(i)  # DOES NOT WORK
    return async_cls
As the comment in the code above says, the problem is getting a method given its name as a string. How do I do that? Once I have the method, I'll wrap it in an async_caller method, etc. - the rest of the work I hope I can do myself.
__getattr__ only works on a class instance, not on the class itself. Use the built-in getattr(cls, method_name) instead; it will solve the problem:
getattr(cls, method_name)
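Building on that answer, here is a minimal sketch of how the whole decorator might fit together. It is one possible interpretation, not the only one: asyncio.to_thread (Python 3.9+) is used here to make each blocking method awaitable.

import asyncio

def make_async(cls):
    class async_cls(cls):  # derive from the given class, as in the question
        def __getattr__(self, attr):
            if attr.startswith("async_"):
                # look up the synchronous method by name on the original class
                method = getattr(cls, attr[len("async_"):])
                async def async_caller(*args, **kwargs):
                    # run the blocking method in a worker thread
                    return await asyncio.to_thread(method, self, *args, **kwargs)
                return async_caller
            raise AttributeError(attr)
    return async_cls

AsyncClient = make_async(Client)
client = AsyncClient()
result = await client.async_compute(arg)  # asynchronous wrapper
result = client.compute(arg)              # the synchronous method still exists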
I'm trying to create a class that accesses another class's attributes using __getattr__, in order to wrap that class's calls.
from aiohttp import ClientSession
from contextlib import asynccontextmanager


class SessionThrottler:

    def __init__(self, session: ClientSession,
                 time_period: int, max_tasks: int):
        self._obj = session
        self._bucket = AsyncLeakyBucket(max_tasks=max_tasks,
                                        time_period=time_period)

    def __getattr__(self, name):
        @asynccontextmanager
        async def _do(*args, **kwargs):
            async with self._bucket:
                res = await getattr(self._obj, name)(*args, **kwargs)
                yield res
        return _do

    async def close(self):
        await self._obj.close()
So then I can do:
async def fetch(session: ClientSession):
    async with session.get('http://localhost:5051') as resp:
        _ = resp


session = ClientSession()
session_throttled = SessionThrottler(session, 4, 2)

await asyncio.gather(
    *[fetch(session_throttled)
      for _ in range(10)]
)
This code works fine, but how can I make session_throttled be inferred as a ClientSession instead of a SessionThrottler (kind of like functools.wraps does for functions)?
It depends on what you mean by "is inferred as".
Making throttled sessions instances of ClientSession
The natural way of doing that with classes is through inheritance: if your SessionThrottler inherits from ClientSession, it would naturally be a ClientSession as well.
The "small downside" is that __getattr__ would then not work as expected, since it is only called for attributes not found in the instance - Python would "see" the original methods from ClientSession on your throttled session object and call those instead.
Of course, that would also require you to inherit statically, and you may want it to work dynamically. (By statically, I mean having to write class SessionThrottler(ClientSession): - or at least, if there is a finite number of different Session classes you want to wrap, writing for each a subclass that inherits from the throttled class as well.)
class ThrottledClientSession(ThrottledSession, ClientSession):
    ...
If that is something that would work for you, then it is a matter of fixing attribute access by implementing __getattribute__ instead of __getattr__. The difference between the two is that __getattribute__ encompasses all of the attribute-lookup steps and is called at the beginning of the lookup, whereas __getattr__ is called as part of the normal lookup (inside the standard algorithm for __getattribute__) only when everything else fails.
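A tiny standalone demonstration of that difference (a sketch, unrelated to the session code below):

class Demo:
    x = 1

    def __getattribute__(self, name):
        print('getattribute', name)   # runs for every attribute lookup
        return super().__getattribute__(name)

    def __getattr__(self, name):
        print('getattr', name)        # runs only when the normal lookup fails
        return 42

d = Demo()
d.x        # prints 'getattribute x', returns 1
d.missing  # prints 'getattribute missing', then 'getattr missing', returns 42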
class SessionThrottlerMixin:

    def __init__(self, session: ClientSession,
                 time_period: int, max_tasks: int):
        self._bucket = AsyncLeakyBucket(max_tasks=max_tasks,
                                        time_period=time_period)

    def __getattribute__(self, name):
        attr = super().__getattribute__(name)
        # pass private attributes and non-callables through untouched
        if name.startswith("_") or not callable(attr):
            return attr

        @asynccontextmanager
        async def _do(*args, **kwargs):
            async with self._bucket:
                res = await attr(*args, **kwargs)
                yield res
        return _do
class ThrottledClientSession(SessionThrottlerMixin, ClientSession):
    pass
If you are getting your ClientSession instances from other code and can't, or don't want to, replace the base class with this one, you can do it on the desired instances by assigning to the __class__ attribute:
session.__class__ = ThrottledClientSession
This works if ClientSession is a normal Python class: not inheriting from special bases like the Python built-ins, not using __slots__, and a few other restrictions. The instance is "converted" to a ThrottledClientSession mid-flight (but you still have to do the inheritance thing above).
Class assignment in this way won't run the new class's __init__. Since you need the _bucket to be created, you could have a class method that creates the bucket and makes the replacement - so, in the version with __getattribute__, add something like:
class SessionThrottlerMixin:
    ...
    @classmethod
    def _wrap(cls, instance, time_period: int, max_tasks: int):
        instance.__class__ = cls
        instance._bucket = AsyncLeakyBucket(max_tasks=max_tasks,
                                            time_period=time_period)
        return instance
    ...

throttled_session = ThrottledClientSession._wrap(session, 4, 2)
If you have a lot of parent classes that you want to wrap this way and you don't want to declare a throttled version of each, this could be done dynamically - but I would only go that way if it were the only way. Declaring some 10 stub throttled versions, 3 lines each, would be preferable.
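For reference, the dynamic version amounts to a small type() call - a sketch, with make_throttled being a hypothetical helper name:

def make_throttled(session_cls):
    # build the "Throttled..." stub subclass on the fly
    return type("Throttled" + session_cls.__name__,
                (SessionThrottlerMixin, session_cls), {})

ThrottledClientSession = make_throttled(ClientSession)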
Virtual Subclassing
If you can change the code of your ClientSession classes (and the others you want to wrap), this is the least obtrusive way.
Python has an obscure OOP feature called virtual subclassing, in which a class can be registered as a subclass of another without real inheritance. However, the class that is to be the "parent" has to have abc.ABCMeta as its metaclass - apart from that requirement, it is really unobtrusive.
Here is how it works:
In [13]: from abc import ABC

In [14]: class A(ABC):
    ...:     pass
    ...:

In [15]: class B:  # does not inherit from A
    ...:     pass

In [16]: A.register(B)
Out[16]: __main__.B

In [17]: isinstance(B(), A)
Out[17]: True
So, with your original code, if you can make ClientSession inherit from abc.ABC (with no other change at all), you can then do:
ClientSession.register(SessionThrottler)
and it would just work (if the "inferred as" you mean has to do with the object's type).
Note that if ClientSession and the others have a different metaclass, adding abc.ABC as one of their bases will fail with a metaclass conflict. If you can change their code, this is still the better way to go: just create a collaborative metaclass that inherits from both metaclasses, and you are all set:
class Session(metaclass=IDontCare):
    ...


from abc import ABCMeta

class CollaborativeMeta(ABCMeta, Session.__class__):
    pass

class ClientSession(Session, metaclass=CollaborativeMeta):
    ...
Type hinting
If you don't need isinstance to work and the object just has to look right to the typing system, then it is simply a matter of using typing.cast:
import typing as T
...
session = ClientSession()
session_throttled = T.cast(ClientSession, SessionThrottler(session, 4, 2))
The object is untouched at runtime - just the same object, but from that point on, tools like mypy will consider it to be an instance of ClientSession.
Last, but not least, change the class name.
So, if by "inferred as" you don't mean that the wrapped class should be seen as an instance, but just care about the class name showing correctly in logs and such, you can just set the class __name__ attribute to whatever string you want:
class SessionThrottler:
    ...

SessionThrottler.__name__ = ClientSession.__name__
Or just have an appropriate __repr__ method on the wrapper class:
class SessionThrottler:
    ...
    def __repr__(self):
        return repr(self._obj)
This solution is based on patching the object's methods (instead of wrapping the object) inside a context manager.
import asyncio
import functools
import contextlib


class Counter:
    async def infinite(self):
        cnt = 0
        while True:
            yield cnt
            cnt += 1
            await asyncio.sleep(1)
def limited_infinite(f, limit):
    @functools.wraps(f)
    async def inner(*a, **kw):
        cnt = 0
        async for res in f(*a, **kw):
            yield res
            if cnt == limit:
                break
            cnt += 1
    return inner
@contextlib.contextmanager
def throttler(limit, counter):
    orig = counter.infinite
    counter.infinite = limited_infinite(counter.infinite, limit)
    yield counter
    counter.infinite = orig
async def main():
    with throttler(5, Counter()) as counter:
        async for x in counter.infinite():
            print('res: ', x)


if __name__ == "__main__":
    asyncio.run(main())
For your case it means patching every relevant method of ClientSession (probably just the HTTP methods). Not sure if it is better though.
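Applied to the question, that could look something like the following. This is only a sketch: throttle_methods is a hypothetical helper, and it assumes the patched attributes are plain coroutine methods; aiohttp's request methods actually return async context managers, so they would need a variant that wraps the context manager instead.

import contextlib

@contextlib.contextmanager
def throttle_methods(obj, bucket, names):
    # temporarily replace the named coroutine methods with throttled wrappers
    originals = {name: getattr(obj, name) for name in names}

    def throttled(f):
        async def inner(*args, **kwargs):
            async with bucket:  # wait for a slot in the leaky bucket
                return await f(*args, **kwargs)
        return inner

    try:
        for name, f in originals.items():
            setattr(obj, name, throttled(f))
        yield obj
    finally:
        for name, f in originals.items():  # restore the originals on exit
            setattr(obj, name, f)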
I'm trying to override the DaemonRunner in the python-daemon library (found here: https://pypi.python.org/pypi/python-daemon/).
The DaemonRunner responds to command line arguments for start, stop, and restart, but I want to add a fourth option for status.
The class I want to override looks something like this:
class DaemonRunner(object):
    def _start(self):
        ...  # etc.

    action_funcs = {'start': _start}
I've tried to override it like this:
class StatusDaemonRunner(DaemonRunner):
    def _status(self):
        ...

    DaemonRunner.action_funcs['status'] = _status
This works to some extent, but the problem is that every instance of DaemonRunner now has the new behaviour. Is it possible to override it without modifying every instance of DaemonRunner?
I would override action_funcs to make it a non-static member of class StatusDaemonRunner(DaemonRunner).
In terms of code I would do:
class StatusDaemonRunner(runner.DaemonRunner):

    def __init__(self, app):
        self.action_funcs = runner.DaemonRunner.action_funcs.copy()
        self.action_funcs['status'] = StatusDaemonRunner._status
        super(StatusDaemonRunner, self).__init__(app)

    def _status(self):
        pass  # do your stuff
Indeed, if we look at the getter in the implementation of DaemonRunner (here), we can see that it accesses the attribute through self:
def _get_action_func(self):
    """ Return the function for the specified action.

        Raises ``DaemonRunnerInvalidActionError`` if the action is
        unknown.

        """
    try:
        func = self.action_funcs[self.action]
    except KeyError:
        raise DaemonRunnerInvalidActionError(
            u"Unknown action: %(action)r" % vars(self))
    return func
Hence the previous code should do the trick.
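Usage then stays the same as with the stock runner - a sketch, where app stands for whatever application object you already pass to DaemonRunner (something with stdin_path, pidfile_path, a run() method, and so on):

runner = StatusDaemonRunner(app)  # parses the command-line action, now including 'status'
runner.do_action()                # dispatches through self.action_funcs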
Say I have a class called Client that creates an object of the Request class and passes it to a method of a Connection object:
class Client(object):
    def __init__(self, connection):
        self._conn = connection

    def sendText(self, plaintext):
        self._conn.send(Request(0, plaintext))
And I want to assert on the object passed into the Connection.send method, to check its properties. I start by creating a mocked Connection:
conn = Mock()
client = Client(conn)
client.sendText('some message')
And then I want something like:
conn.send.assert_called_with(
    (Request,
     {'type': 0, 'text': 'some message'})
)
Where 'type' and 'text' are properties of Request. Is there a way to do this in python's mock? All I found in the documentation were simple data examples.
I could have done it with the mock.patch decorator, by replacing the original 'send' method with one that asserts on the object's fields:
def patchedSend(self, req):
    assert req.Type == 0

with mock.patch.object(Connection, 'send', TestClient.patchedSend):
    ...
but in this case I would have to define a separate patched function for every method check, and I couldn't check (without additional coding) whether the function was called at all.
You can get the arguments of the last call to the mock with
(request,), _ = conn.send.call_args  # call_args is an (args, kwargs) pair
and then assert properties about that. If you want facilities to express more sophisticated assertions about things, you can install PyHamcrest.
Note: Don't use assert in unit tests. Use assertion methods like assertEqual or assertTrue. Assertion methods can't be accidentally turned off, and they can give more useful messages than assert statements.
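Put together, a test could look like this - a sketch, with the attribute names type and text taken from the question's Request:

import unittest
from unittest.mock import Mock

class TestClient(unittest.TestCase):
    def test_sendText_builds_request(self):
        conn = Mock()
        client = Client(conn)
        client.sendText('some message')

        conn.send.assert_called_once()       # fails if send was never called
        (request,), _ = conn.send.call_args  # unpack positional and keyword args
        self.assertEqual(request.type, 0)
        self.assertEqual(request.text, 'some message')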
Well, I think the easiest and best way of doing it, in this specific case, is to extract a function that creates the request and then mock that.
For instance, it'd be something like this:
class Client(object):
    def __init__(self, connection):
        self._conn = connection

    def _create_request(self, plain_text):
        return Request(0, plain_text)

    def send_text(self, plain_text):
        self._conn.send(self._create_request(plain_text))
And then, in the test, you could mock _create_request to return some specific value, and assert that send was called with it.
You could also get the parameters by call_args, as suggested, but I think it looks better this way.
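For instance - a sketch, where sentinel just provides a unique placeholder object:

from unittest.mock import Mock, patch, sentinel

def test_send_text():
    conn = Mock()
    client = Client(conn)
    # replace _create_request so we control exactly what send receives
    with patch.object(Client, '_create_request', return_value=sentinel.request):
        client.send_text('some message')
    conn.send.assert_called_once_with(sentinel.request)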
I am trying to give a slight amount of genericity to my code. Basically, what I am looking for is this: I wish to write an API interface MyAPI:
class MyAPI(object):
    def __init__(self):
        pass
    def upload(self):
        pass
    def download(self):
        pass

class MyAPIEx(object):
    def upload(self):
        ...  # specific implementation

class MyAPIEx2(object):
    def upload(self):
        ...  # specific implementation
# Actual usage ...
def use_api():
    obj = MyAPI()
    obj.upload()
So what I want is that, based on a configuration, I should be able to call the upload function of either MyAPIEx or MyAPIEx2. What is the exact design pattern I am looking for, and how do I implement it in Python?
You are looking for the Factory Method pattern (or any other implementation of a factory).
It's really hard to say which pattern you are using without more info. The way to instantiate MyAPI is indeed a factory, like @Darhazer mentioned, but it sounds more like you're interested in the pattern used for the MyAPI class hierarchy, and without more info we can't say.
I made some code improvements below; look for the comments with the word IMPROVEMENT.
class MyAPI(object):
    def __init__(self):
        pass

    def upload(self):
        # IMPROVEMENT: making this function abstract
        # This is how I do it, but you can find other ways searching on google
        raise NotImplementedError("upload function not implemented")

    def download(self):
        # IMPROVEMENT: making this function abstract
        # This is how I do it, but you can find other ways searching on google
        raise NotImplementedError("download function not implemented")

# IMPROVEMENT: notice that I changed object to MyAPI to inherit from it
class MyAPIEx(MyAPI):
    def upload(self):
        ...  # specific implementation

# IMPROVEMENT: notice that I changed object to MyAPI to inherit from it
class MyAPIEx2(MyAPI):
    def upload(self):
        ...  # specific implementation

# IMPROVEMENT: changed use_api() to get_api(), which is a factory;
# call it to get the MyAPI implementation
def get_api(configDict):
    if 'MyAPIEx' in configDict:
        return MyAPIEx()
    elif 'MyAPIEx2' in configDict:
        return MyAPIEx2()
    else:
        raise ValueError("unknown API configuration")  # some sort of an error
# Actual usage ...
# IMPROVEMENT: create a config dictionary to be used in the factory
configDict = dict()
# fill in the config accordingly
obj = get_api(configDict)
obj.upload()
I'm writing a class that interfaces to a MoinMoin wiki via xmlrpc (simplified code follows):
class MoinMoin(object):

    token = None

    def __init__(self, url, username=None, password=None):
        self.wiki = xmlrpclib.ServerProxy(url + '/?action=xmlrpc2')
        if username and password:
            self.token = self.wiki.getAuthToken(username, password)

    # some sample methods:
    def searchPages(self, regexp): ...
    def getPage(self, page): ...
    def putPage(self, page): ...
Now each of my methods needs to call the relevant xmlrpc method directly if there is no authentication involved, or wrap it in a multicall if there is. Example:
def getPage(self, page):
    if not self.token:
        result = self.wiki.getPage(page)
    else:
        mc = xmlrpclib.MultiCall(self.wiki)  # build an XML-RPC multicall
        mc.applyAuthToken(self.token)        # call 1
        mc.getPage(page)                     # call 2
        result = mc()[-1]                    # run both, keep result of the latter
    return result
Is there any nicer way to do it, other than repeating that stuff for each and every method?
Since I have to call arbitrary methods, wrap them with stuff, then call the identically named method on another class, select relevant results and give them back, I suspect the solution would involve meta-classes or similar esoteric (for me) stuff. I should probably look at xmlrpclib sources and see how it's done, then maybe subclass their MultiCall to add my stuff...
But maybe I'm missing something easier. The best I've come out with is something like:
def _getMultiCall(self):
    mc = xmlrpclib.MultiCall(self.wiki)
    if self.token:
        mc.applyAuthToken(self.token)
    return mc

def fooMethod(self, x):
    mc = self._getMultiCall()
    mc.fooMethod(x)
    return mc()[-1]
but it still repeats the same three lines of code for each and every method I need to implement, just changing the called method name. Any better?
Python functions are objects, so they can be passed quite easily to other functions:
def HandleAuthAndReturnResult(self, method, arg):
    mc = xmlrpclib.MultiCall(self.wiki)
    if self.token:
        mc.applyAuthToken(self.token)
    method(mc, arg)
    return mc()[-1]

def fooMethod(self, x):
    return self.HandleAuthAndReturnResult(xmlrpclib.MultiCall.fooMethod, x)
There may be other ways, but I think this should work. Of course, the arg part needs to be aligned with what the method needs, but here all your methods take one argument.
Edit: I didn't realize that MultiCall was a proxy object. Even if the method that ultimately runs is the one on your ServerProxy, you should not pass the method object around in case MultiCall ever overrides (defines) it. Instead, use getattr with the name of the method you want to call, and then call the returned function object. Take care to handle the AttributeError exception.
Methods would now look like:
def HandleAuthAndReturnResult(self, methodName, arg):
    mc = xmlrpclib.MultiCall(self.wiki)
    if self.token:
        mc.applyAuthToken(self.token)
    try:
        methodToCall = getattr(mc, methodName)
    except AttributeError:
        return None
    methodToCall(arg)
    return mc()[-1]

def fooMethod(self, x):
    return self.HandleAuthAndReturnResult('fooMethod', x)
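If even the one-line fooMethod stubs feel repetitive, the lookup can be pushed into __getattr__ on MoinMoin itself, so any unknown method name is forwarded automatically. A sketch under the same assumptions as the question's code; it only handles positional arguments, and it always goes through MultiCall (which also covers the unauthenticated case, just with a single call in the batch):

class MoinMoin(object):
    # __init__ as in the question ...

    def __getattr__(self, name):
        def call(*args):
            mc = xmlrpclib.MultiCall(self.wiki)
            if self.token:
                mc.applyAuthToken(self.token)
            getattr(mc, name)(*args)
            return mc()[-1]  # keep only the result of the actual call
        return call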