Python: make subclass of generic class generic by default

I want to be able to define what the contents of a subclass of a subclass of typing.Iterable have to be.
Type hints are critical for this project, so I have to find a working solution.
Here is a code snippet of what I've already tried and what I want:
# The code I'm writing:
from typing import TypeVar, Iterable

T = TypeVar('T')

class Data:
    pass

class GeneralPaginatedCursor(Iterable[T]):
    """
    If this type of cursor is used by an EDR, a specific implementation has to be created.
    Handle paginated lists, exposes hooks to simplify retrieval and parsing of paginated data.
    """
    # Implementation
    pass
###########
# This part is supposed to be written by different developers in a different place:
class PaginatedCursor(GeneralPaginatedCursor):
    pass

def foo() -> GeneralPaginatedCursor[Data]:
    """
    Works great
    """
    pass

def bar() -> PaginatedCursor[Data]:
    """
    Raises:
    Traceback (most recent call last):
      .
      .
      .
        def bar(self) -> PaginatedCursor[Data]:
      File "****\Python\Python38-32\lib\typing.py", line 261, in inner
        return func(*args, **kwds)
      File "****\Python\Python38-32\lib\typing.py", line 894, in __class_getitem__
        _check_generic(cls, params)
      File "****\Python\Python38-32\lib\typing.py", line 211, in _check_generic
        raise TypeError(f"{cls} is not a generic class")
    """
    pass
I don't want to leave it to the other developers in the future to remember to inherit from Iterable, because if someone misses it, everything will break.
I found the exact same issue here:
https://github.com/python/cpython/issues/82640
But there is no answer.

The only requirement is that GeneralPaginatedCursor define __iter__ to return an iterator (namely, something with a __next__ method).
The error you see occurs because, since GeneralPaginatedCursor is generic in T, PaginatedCursor should be as well.
class PaginatedCursor(GeneralPaginatedCursor[T]):
    pass
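For illustration, here is a minimal sketch of how the fixed hierarchy behaves (the DataCursor subclass and the Iterator return type are illustrative additions, not part of the original question):

from typing import Iterable, Iterator, TypeVar

T = TypeVar('T')

class Data:
    pass

class GeneralPaginatedCursor(Iterable[T]):
    def __iter__(self) -> Iterator[T]:
        ...

# Stays generic: callers can still write PaginatedCursor[Data]
class PaginatedCursor(GeneralPaginatedCursor[T]):
    pass

# A subclass may instead pin the type parameter up front
class DataCursor(GeneralPaginatedCursor[Data]):
    pass

def bar() -> PaginatedCursor[Data]:  # no longer raises TypeError
    ...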

Related

TypeError: issubclass() arg 1 must be a class, while I am quite sure arg 1 is a class

Somewhere within my code I have the following line of code.
from inspect import isclass
if isclass(route.handler) and issubclass(route.handler, web.View):
Unfortunately this line of code gives the exception below in my production environment.
TypeError: issubclass() arg 1 must be a class
As far as I know, the Python (3.7.7) interpreter evaluates the first condition of the if statement first and, if it evaluates to false, does not evaluate the second condition. Therefore I must conclude that route.handler must be a class, and therefore the TypeError I am getting should not be occurring. Am I missing something here? Does someone know what might be causing this?
(Unfortunately I am not able to reproduce the error)
edit:
The error originates from the aiohttp-swagger package. Here's the entire traceback:
Traceback (most recent call last):
  File "/var/www/app/main.py", line 249, in <module>
    run_app(cfg)
  File "/var/www/app/main.py", line 225, in run_app
    setup_swagger(app, ui_version=SWAGGER_VERSION, swagger_url=SWAGGER_URL)
  File "/home/webapp/.venv/lib/python3.7/site-packages/aiohttp_swagger/__init__.py", line 72, in setup_swagger
    security_definitions=security_definitions
  File "/home/webapp/.venv/lib/python3.7/site-packages/aiohttp_swagger/helpers/builders.py", line 163, in generate_doc_from_each_end_point
    end_point_doc = _build_doc_from_func_doc(route)
  File "/home/webapp/.venv/lib/python3.7/site-packages/aiohttp_swagger/helpers/builders.py", line 44, in _build_doc_from_func_doc
    if isclass(route.handler) and issubclass(route.handler, web.View):
  File "/home/webapp/.venv/lib/python3.7/abc.py", line 143, in __subclasscheck__
    return _abc_subclasscheck(cls, subclass)
TypeError: issubclass() arg 1 must be a class
edit2:
The route.handler should be an aiohttp class-based view. For example, this is how one would create one and build a swagger UI on top of that.
class PingHandler(web.View):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    async def get(request):
        """
        ---
        description: This end-point allow to test that service is up.
        tags:
        - Health check
        produces:
        - text/plain
        responses:
            "200":
                description: successful operation. Return "pong" text
            "405":
                description: invalid HTTP Method
        """
        return web.Response(text="pong")

app = web.Application()
app.router.add_route('GET', "/ping", PingHandler)
setup_swagger(app, swagger_url="/api/v1/doc", ui_version=3)
In my current implementation I also have a decorator added to the Handler class.
edit3:
When debugging locally (where it's working fine), the route.handler seems to be a <class 'abc.ABCMeta'>.
I finally discovered the problem. The error is raised whenever a decorator from the wrapt library is used together with a class whose metaclass is abc.ABCMeta. This is currently an open issue for the wrapt library. An example is shown below:
import abc
from inspect import isclass
import wrapt

class Base:
    pass

class BaseWithMeta(metaclass=abc.ABCMeta):
    pass

@wrapt.decorator
def pass_through(wrapped, instance, args, kwargs):
    return wrapped(*args, **kwargs)

@pass_through
class B(BaseWithMeta):
    pass

@pass_through
class C(Base):
    pass

>>> isclass(C)
True
>>> issubclass(C, Base)
True
>>> isclass(B)
True
>>> issubclass(B, BaseWithMeta)
TypeError: issubclass() arg 1 must be a class

bonobo method fails after being overridden

I am using a light ETL library called bonobo.
The csv writer bonobo.CsvWriter class has a factory method:
def writer_factory(self, file):
    return csv.writer(file, **self.get_dialect_kwargs()).writerow
with the docs:
class CsvWriter(FileWriter, CsvHandler):
    @Method(
        __doc__='''
        Builds the CSV writer, a.k.a an object we can pass a field collection to be written as one line in the
        target file.
        Defaults to builtin csv.writer(...).writerow, but can be overriden to fit your special needs.
        '''
    )
I'd like to add some extra parameters to customize my csv file, so I try to override it as such:
class quoCsvWriter(bonobo.CsvWriter):
    def writer_factory(self, file):
        return csv.writer(file, **self.get_dialect_kwargs(), quoting=csv.QUOTE_NONNUMERIC).writerow
When I add the node into the chain, the program shows:
Traceback (most recent call last):
  File "geocoding.py", line 162, in <module>
    get_graph(),
  File "geocoding.py", line 135, in get_graph
    quoCsvWriter('db_addresses.csv')
  File "/Users/xxxx/xxxx/lib/python3.6/site-packages/bonobo/config/configurables.py", line 152, in __new__
    missing.remove(name)
KeyError: 'writer_factory'
Any hints are appreciated.
Update:
Meanwhile, when I try to do
bonobo.CsvWriter('filename.csv', quoting=csv.QUOTE_MINIMAL)
it throws an error:
TypeError: "quoting" must be an integer
As of bonobo 0.6, overriding Method instances directly in subclasses is non-trivial. Instead, you should provide an overridden implementation via the constructor arguments.
def writer_factory(self, file):
    return csv.writer(file, **{**self.get_dialect_kwargs(), 'quoting': csv.QUOTE_NONNUMERIC}).writerow

def get_graph(**options):
    graph = bonobo.Graph()
    graph.add_chain(
        extract,
        bonobo.CsvWriter('...', writer_factory=writer_factory),
    )
    return graph
If you really want to subclass for this use case, you can do it by overriding the get_dialect_kwargs() method instead:
@use_context
class QuoteNonNumericCsvWriter(bonobo.CsvWriter):
    def get_dialect_kwargs(self):
        return {
            **super().get_dialect_kwargs(),
            'quoting': csv.QUOTE_NONNUMERIC,
        }
This should work as expected.
Of course, overriding quoting directly from the writer constructor is possible as of bonobo 0.6.2; there was a bug around this before, but the fix is now released.
def get_graph(**options):
    graph = bonobo.Graph()
    graph.add_chain(
        extract,
        bonobo.CsvWriter('...', quoting=csv.QUOTE_NONNUMERIC),
    )
All three methods have exactly the same behaviour; you should favour the last one.
Hope that helps.

Cannot unpickle Exception subclass

Simplified version of Why is my custom exception unpickle failing.
I am trying to pickle a 'simple' exception subclass. It pickles OK, but when unpickling it falls over:
import pickle

class ABError(Exception):
    def __init__(self, a, b):
        self.a = a
        self.b = b

ab_err = ABError("aaaa", "bbbb")
pickled = pickle.dumps(ab_err)
original = pickle.loads(pickled) # Fails
Error:
Traceback (most recent call last):
  File "p.py", line 12, in <module>
    original = pickle.loads(pickled) # Fails
  File "/usr/lib/python2.7/pickle.py", line 1388, in loads
    return Unpickler(file).load()
  File "/usr/lib/python2.7/pickle.py", line 864, in load
    dispatch[key](self)
  File "/usr/lib/python2.7/pickle.py", line 1139, in load_reduce
    value = func(*args)
TypeError: __init__() takes exactly 3 arguments (1 given)
An earlier comment suggested the issue is because the built-in Exception class supplies a __setstate__() method. However, it's not clear to me if this is expected behaviour or not - it certainly seems surprising, since doing the same thing with a subclass of object works OK.
The BaseException class defines a custom __reduce__ method in exceptions.c, which returns the list of arguments to pass to __init__. Exact code is
if (self->args && self->dict)
    return PyTuple_Pack(3, Py_TYPE(self), self->args, self->dict);
else
    return PyTuple_Pack(2, Py_TYPE(self), self->args);
According to the __reduce__ documentation:
- the first item of the tuple is the callable to invoke. Here, that will be the exception class.
- the second item is the tuple of arguments to pass to the callable. Here, that will be self.args.
- the third item is a dict to merge into self.__dict__.
So from this, BaseException.__reduce__ will make unpickle invoke the exception's constructor with given args.
You have two options: either override __reduce__, or put the required arguments in self.args, either directly or by letting the parent class do it:
import pickle

class ABError(Exception):
    def __init__(self, a, b):
        self.a = a
        self.b = b
        # self.args = (a, b)
        # maybe better, let the base class's __init__ do it =>
        super(ABError, self).__init__(a, b)

ab_err = ABError("aaaa", "bbbb")
pickled = pickle.dumps(ab_err)
original = pickle.loads(pickled) # no longer fails
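For reference, here is a minimal sketch of the first option, overriding __reduce__ yourself (an illustration of the technique described above, not code from the original answer):

import pickle

class ABError(Exception):
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def __reduce__(self):
        # (callable, args): unpickling will call ABError(self.a, self.b)
        return (ABError, (self.a, self.b))

ab_err = ABError("aaaa", "bbbb")
original = pickle.loads(pickle.dumps(ab_err))  # works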
Note that the original issue comes from the rather naive way BaseException pickle handling works. It is fixed in recent Python 3 releases; your question's original code works fine on Python 3.5, for instance.

multiprocessing and modules

I am attempting to use multiprocessing to call a derived-class member function defined in a different module. There seem to be several questions dealing with calling class methods from the same module, but none from different modules. For example, if I have the following structure:
main.py
multi/
    __init__.py (empty)
    base.py
    derived.py
main.py
from multi.derived import derived
from multi.base import base

if __name__ == '__main__':
    base().multiFunction()
    derived().multiFunction()
base.py
import multiprocessing

# The following two functions wrap calling a class method
def wrapPoolMapArgs(classInstance, functionName, argumentLists):
    className = classInstance.__class__.__name__
    return zip([className] * len(argumentLists), [functionName] * len(argumentLists), [classInstance] * len(argumentLists), argumentLists)

def executeWrappedPoolMap(args, **kwargs):
    classType = eval(args[0])
    funcType = getattr(classType, args[1])
    funcType(args[2], args[3:], **kwargs)

class base:
    def multiFunction(self):
        mppool = multiprocessing.Pool()
        mppool.map(executeWrappedPoolMap, wrapPoolMapArgs(self, 'method', range(3)))

    def method(self, args):
        print "base.method: " + args.__str__()
derived.py
from base import base

class derived(base):
    def method(self, args):
        print "derived.method: " + args.__str__()
Output
base.method: (0,)
base.method: (1,)
base.method: (2,)
Traceback (most recent call last):
  File "e:\temp\main.py", line 6, in <module>
    derived().multiFunction()
  File "e:\temp\multi\base.py", line 15, in multiFunction
    mppool.map(executeWrappedPoolMap, wrapPoolMapArgs(self, 'method', range(3)))
  File "C:\Program Files\Python27\lib\multiprocessing\pool.py", line 251, in map
    return self.map_async(func, iterable, chunksize).get()
  File "C:\Program Files\Python27\lib\multiprocessing\pool.py", line 567, in get
    raise self._value
NameError: name 'derived' is not defined
I have tried fully qualifying the class name in the wrapPoolMapArgs method, but that just gives the same error, saying multi is not defined.
Is there some way to achieve this, or must I restructure to have all classes in the same package if I want to use multiprocessing with inheritance?
This is almost certainly caused by the ridiculous eval-based approach to dynamically invoking specific code.
In executeWrappedPoolMap (in base.py), you convert a str name of a class to the class itself with classType = eval(args[0]). But eval is executed in the scope of executeWrappedPoolMap, which is in base.py, and can't find derived (because the name doesn't exist in base.py).
Stop passing the name, and pass the class object itself: pass classInstance.__class__ instead of classInstance.__class__.__name__. multiprocessing will pickle it for you, and you can use it directly on the other end instead of using eval (which is nearly always wrong; it's a code smell of the strongest sort).
BTW, the reason the traceback isn't super helpful is that the exception is raised in the worker, caught, pickle-ed, and sent back to the main process and re-raise-ed. The traceback you see is from that re-raise, not where the NameError actually occurred (which was in the eval line).
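A minimal sketch of that change, adapted from the question's own wrapper functions (this is an illustration of the advice above, not the answerer's exact code):

# base.py (sketch): pass the class object itself; the worker no longer needs eval
def wrapPoolMapArgs(classInstance, functionName, argumentLists):
    cls = classInstance.__class__  # class objects are pickled by reference
    return zip([cls] * len(argumentLists), [functionName] * len(argumentLists),
               [classInstance] * len(argumentLists), argumentLists)

def executeWrappedPoolMap(args, **kwargs):
    classType = args[0]                     # already a class object
    funcType = getattr(classType, args[1])  # resolves the overridden method on derived
    funcType(args[2], args[3:], **kwargs)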

How to declare @staticmethod in zope.interface

I am trying to create an interface with @staticmethod and @classmethod. Declaring the class method is simple, but I can't find the correct way to declare the static method.
Consider the interface and its implementation:
#!/usr/bin/python3
from zope.interface import Interface, implementer, verify

class ISerializable(Interface):
    def from_dump(slice_id, intex_list, input_stream):
        '''Loads from dump.'''

    def dump(out_stream):
        '''Writes dump.'''

    def load_index_list(input_stream):
        '''staticmethod'''

@implementer(ISerializable)
class MyObject(object):
    def dump(self, out_stream):
        pass

    @classmethod
    def from_dump(cls, slice_id, intex_list, input_stream):
        return cls()

    @staticmethod
    def load_index_list(stream):
        pass

verify.verifyClass(ISerializable, MyObject)
verify.verifyObject(ISerializable, MyObject())
verify.verifyObject(ISerializable, MyObject.from_dump(0, [], 'stream'))
Output:
Traceback (most recent call last):
  File "./test-interface.py", line 31, in <module>
    verify.verifyClass(ISerializable, MyObject)
  File "/usr/local/lib/python3.4/dist-packages/zope/interface/verify.py", line 102, in verifyClass
    return _verify(iface, candidate, tentative, vtype='c')
  File "/usr/local/lib/python3.4/dist-packages/zope/interface/verify.py", line 97, in _verify
    raise BrokenMethodImplementation(name, mess)
zope.interface.exceptions.BrokenMethodImplementation: The implementation of load_index_list violates its contract
    because implementation doesn't allow enough arguments.
How should I correctly declare a static method in this interface?
Obviously verifyClass does not understand either classmethod or staticmethod properly. The problem is that in Python 3, if you do getattr(MyObject, 'load_index_list'), you get a bare function, and verifyClass thinks it is yet another unbound method, and then expects the implicit self to be the first argument.
The easiest fix is to use a classmethod there instead of a staticmethod.
I guess someone could also file a bug report.
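A minimal sketch of that workaround, adapted from the question's code (it reuses the ISerializable interface defined above; this is not code from the original answer):

from zope.interface import implementer, verify

@implementer(ISerializable)
class MyObject(object):
    def dump(self, out_stream):
        pass

    @classmethod
    def from_dump(cls, slice_id, intex_list, input_stream):
        return cls()

    @classmethod
    def load_index_list(cls, stream):  # was @staticmethod; cls is bound away, so the signature matches the interface
        pass

verify.verifyClass(ISerializable, MyObject)  # passes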
