I'm trying to create an interface with @staticmethod and @classmethod. Declaring the class method is simple, but I can't find the correct way to declare the static method.
Consider this interface and its implementation:
#!/usr/bin/python3
from zope.interface import Interface, implementer, verify

class ISerializable(Interface):
    def from_dump(slice_id, index_list, input_stream):
        '''Loads from dump.'''

    def dump(out_stream):
        '''Writes dump.'''

    def load_index_list(input_stream):
        '''staticmethod'''

@implementer(ISerializable)
class MyObject(object):
    def dump(self, out_stream):
        pass

    @classmethod
    def from_dump(cls, slice_id, index_list, input_stream):
        return cls()

    @staticmethod
    def load_index_list(stream):
        pass

verify.verifyClass(ISerializable, MyObject)
verify.verifyObject(ISerializable, MyObject())
verify.verifyObject(ISerializable, MyObject.from_dump(0, [], 'stream'))
Output:
Traceback (most recent call last):
  File "./test-interface.py", line 31, in <module>
    verify.verifyClass(ISerializable, MyObject)
  File "/usr/local/lib/python3.4/dist-packages/zope/interface/verify.py", line 102, in verifyClass
    return _verify(iface, candidate, tentative, vtype='c')
  File "/usr/local/lib/python3.4/dist-packages/zope/interface/verify.py", line 97, in _verify
    raise BrokenMethodImplementation(name, mess)
zope.interface.exceptions.BrokenMethodImplementation: The implementation of load_index_list violates its contract
because implementation doesn't allow enough arguments.
How should I correctly declare a static method in this interface?
Obviously verifyClass does not understand either classmethod or staticmethod properly. The problem is that in Python 3, getattr(MyObject, 'load_index_list') returns a bare function, so verifyClass takes it for yet another unbound method and expects the implicit self to be its first argument.
The easiest fix is to use a classmethod there instead of a staticmethod.
I guess someone could also file a bug report.
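For illustration, a minimal sketch of that fix, trimmed down to just the method in question (the interface keeps the single stream argument from the question):

#!/usr/bin/python3
from zope.interface import Interface, implementer, verify

class ISerializable(Interface):
    def load_index_list(input_stream):
        '''Loads the index list.'''

@implementer(ISerializable)
class MyObject(object):
    # A classmethod accessed on the class is already bound, so its
    # implicit first argument is hidden from verifyClass and the
    # remaining signature matches the interface declaration.
    @classmethod
    def load_index_list(cls, input_stream):
        pass

verify.verifyClass(ISerializable, MyObject)  # no longer raises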
I want to be able to define what the contents of a subclass of a subclass of typing.Iterable have to be.
Type hints are critical for this project, so I have to find a working solution.
Here is a code snippet of what I've already tried and what I want:
# The code I'm writing:
from typing import TypeVar, Iterable

T = TypeVar('T')

class Data:
    pass

class GeneralPaginatedCursor(Iterable[T]):
    """
    If this type of cursor is used by an EDR, a specific implementation has to be created.
    Handles paginated lists; exposes hooks to simplify retrieval and parsing of paginated data.
    """
    # Implementation
    pass

###########
# This part is supposed to be written by different developers in a different place:
class PaginatedCursor(GeneralPaginatedCursor):
    pass

def foo() -> GeneralPaginatedCursor[Data]:
    """
    Works great
    """
    pass

def bar() -> PaginatedCursor[Data]:
    """
    Raises
    Traceback (most recent call last):
    .
    .
    .
      def bar(self) -> PaginatedCursor[Data]:
      File "****\Python\Python38-32\lib\typing.py", line 261, in inner
        return func(*args, **kwds)
      File "****\Python\Python38-32\lib\typing.py", line 894, in __class_getitem__
        _check_generic(cls, params)
      File "****\Python\Python38-32\lib\typing.py", line 211, in _check_generic
        raise TypeError(f"{cls} is not a generic class")
    """
    pass
I don't want to leave it to other developers in the future to remember to inherit from Iterable, because if someone misses it, everything will break.
I found the exact same issue here:
https://github.com/python/cpython/issues/82640
But there is no answer.
The only requirement is that GeneralPaginatedCursor define __iter__ to return an iterator (namely, something with a __next__ method).
The error you see occurs because, since GeneralPaginatedCursor is generic in T, PaginatedCursor should be as well:
class PaginatedCursor(GeneralPaginatedCursor[T]):
    pass
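Put together, a minimal sketch of the corrected hierarchy (the __iter__ body here is a placeholder for illustration, not from the question):

from typing import TypeVar, Iterable

T = TypeVar('T')

class Data:
    pass

class GeneralPaginatedCursor(Iterable[T]):
    def __iter__(self):
        return iter(())  # placeholder implementation

# Subscripting the base class with T keeps the subclass generic too.
class PaginatedCursor(GeneralPaginatedCursor[T]):
    pass

def bar() -> PaginatedCursor[Data]:  # no longer raises TypeError
    pass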
ctypes has a classmethod from_buffer. I'm trying to add some custom processing to from_buffer() in a subclass, but I'm having trouble calling super(). Here is an example:
from ctypes import c_char, Structure

class Works(Structure):
    _fields_ = [
        ("char", c_char),
    ]

class DoesntWork(Works):
    @classmethod
    def from_buffer(cls, buf):
        print "do some extra stuff"
        return super(DoesntWork, cls).from_buffer(buf)

print Works.from_buffer(bytearray('c')).char
print DoesntWork.from_buffer(bytearray('c')).char
This results in the error:
c
do some extra stuff
Traceback (most recent call last):
  File "superctypes.py", line 18, in <module>
    print DoesntWork.from_buffer(bytearray('c')).char
  File "superctypes.py", line 14, in from_buffer
    return super(DoesntWork, cls).from_buffer(buf)
AttributeError: 'super' object has no attribute 'from_buffer'
What am I missing? Why doesn't super work here?
from_buffer is not actually a class method on Structure; it is a method on Structure's type (that is, its metaclass). As such, it can't be overridden in the usual fashion: it's like asking to override a normal method for a single object, not a class.
Calling type(cls).from_buffer(cls, buf) works. It's pretty terrible, but I don't immediately see another option.
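A minimal sketch of that workaround applied to the subclass from the question (Python 2, matching the original; the class name here is changed for illustration):

from ctypes import c_char, Structure

class Works(Structure):
    _fields_ = [
        ("char", c_char),
    ]

class DoesWork(Works):
    @classmethod
    def from_buffer(cls, buf):
        print "do some extra stuff"
        # from_buffer lives on the metaclass, so reach it there explicitly
        # instead of going through super():
        return type(cls).from_buffer(cls, buf)

print DoesWork.from_buffer(bytearray('c')).char  # prints 'c' after the extra stuff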
I am attempting to use multiprocessing to call a derived-class member function defined in a different module. There seem to be several questions dealing with calling class methods from the same module, but none from different modules. For example, if I have the following structure:
main.py
multi/
    __init__.py (empty)
    base.py
    derived.py
main.py
from multi.derived import derived
from multi.base import base

if __name__ == '__main__':
    base().multiFunction()
    derived().multiFunction()
base.py
import multiprocessing

# The following two functions wrap calling a class method
def wrapPoolMapArgs(classInstance, functionName, argumentLists):
    className = classInstance.__class__.__name__
    return zip([className] * len(argumentLists), [functionName] * len(argumentLists),
               [classInstance] * len(argumentLists), argumentLists)

def executeWrappedPoolMap(args, **kwargs):
    classType = eval(args[0])
    funcType = getattr(classType, args[1])
    funcType(args[2], args[3:], **kwargs)

class base:
    def multiFunction(self):
        mppool = multiprocessing.Pool()
        mppool.map(executeWrappedPoolMap, wrapPoolMapArgs(self, 'method', range(3)))

    def method(self, args):
        print "base.method: " + args.__str__()
derived.py
from base import base

class derived(base):
    def method(self, args):
        print "derived.method: " + args.__str__()
Output
base.method: (0,)
base.method: (1,)
base.method: (2,)
Traceback (most recent call last):
  File "e:\temp\main.py", line 6, in <module>
    derived().multiFunction()
  File "e:\temp\multi\base.py", line 15, in multiFunction
    mppool.map(executeWrappedPoolMap, wrapPoolMapArgs(self, 'method', range(3)))
  File "C:\Program Files\Python27\lib\multiprocessing\pool.py", line 251, in map
    return self.map_async(func, iterable, chunksize).get()
  File "C:\Program Files\Python27\lib\multiprocessing\pool.py", line 567, in get
    raise self._value
NameError: name 'derived' is not defined
I have tried fully qualifying the class name in wrapPoolMapArgs, but that just gives the same error, saying multi is not defined.
Is there some way to achieve this, or must I restructure to have all classes in the same package if I want to use multiprocessing with inheritance?
This is almost certainly caused by the ridiculous eval-based approach to dynamically invoking specific code.
In executeWrappedPoolMap (in base.py), you convert the str name of a class to the class itself with classType = eval(args[0]). But eval is executed in the scope of executeWrappedPoolMap, which is in base.py, and it can't find derived (because that name doesn't exist in base.py).
Stop passing the name and pass the class object itself, i.e. classInstance.__class__ instead of classInstance.__class__.__name__; multiprocessing will pickle it for you, and you can use it directly on the other end instead of using eval (which is nearly always wrong; it's code smell of the strongest sort).
BTW, the reason the traceback isn't super helpful is that the exception is raised in the worker, caught, pickled, and sent back to the main process, where it is re-raised. The traceback you see is from that re-raise, not from where the NameError actually occurred (the eval line).
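A minimal sketch of that change in base.py, keeping the question's names (only the two wrapper functions change; everything else stays as is):

# Pass the class object itself; pickle transfers it by reference
# (module + name), so the worker needs no name lookup of its own.
def wrapPoolMapArgs(classInstance, functionName, argumentLists):
    klass = classInstance.__class__
    return zip([klass] * len(argumentLists), [functionName] * len(argumentLists),
               [classInstance] * len(argumentLists), argumentLists)

def executeWrappedPoolMap(args, **kwargs):
    classType = args[0]  # already a class; no eval required
    funcType = getattr(classType, args[1])
    funcType(args[2], args[3:], **kwargs)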
I came across a bug in production, even though it should have been caught by the unit tests.
import json
from flask import request
from flask.views import MethodView

class Stage2TaskView(MethodView):
    def post(self):
        json_data = json.loads(request.data)
        news_url_string = json_data['news_url_string']
        OpenCalais().generate_tags_for_news(news_url_string)  # ?
        return "", 201
This used to be a static method call:
OpenCalais.generate_tags_for_news(news_url_string)
But then I changed the method and removed the @staticmethod decorator, and forgot to change that line to
OpenCalais().generate_tags_for_news(news_url_string)
The test doesn't catch it, though. How can I test for this in the future?
@mock.patch('news.opencalais.opencalais.OpenCalais.generate_tags_for_news')
def test_url_stage2_points_to_correct_class(self, mo):
    rv = self.client.post('/worker/stage-2', data=json.dumps({'news_url_string': 'x'}))
    self.assertEqual(rv.status_code, 201)
Autospeccing is your friend! Using autospec=True in the patch decorator will check the complete signature:
from mock import patch

class A():
    def no_static_method(self):
        pass

with patch(__name__ + '.A.no_static_method', autospec=True):
    A.no_static_method()
will raise an exception:
Traceback (most recent call last):
  File "/home/damico/PycharmProjects/mock_import/autospec.py", line 9, in <module>
    A.no_static_method()
TypeError: unbound method no_static_method() must be called with A instance as first argument (got nothing instead)
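Applied to the test from the question, the only change needed is the extra argument (a sketch; the patch target is the one the question already uses):

@mock.patch('news.opencalais.opencalais.OpenCalais.generate_tags_for_news', autospec=True)
def test_url_stage2_points_to_correct_class(self, mo):
    # With autospec the mock keeps the real method's signature, so a
    # class-level call that forgets to instantiate OpenCalais now fails
    # the test instead of silently passing.
    rv = self.client.post('/worker/stage-2', data=json.dumps({'news_url_string': 'x'}))
    self.assertEqual(rv.status_code, 201)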
I know how to override an object's __getattr__() to handle calls to undefined object methods. However, I would like to achieve the same behavior for lookups of undefined global names. For instance, consider code like this:
call_some_undefined_function()
Normally, that simply produces an error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'call_some_undefined_function' is not defined
I want to override getattr() so that I can intercept the call to "call_some_undefined_function()" and figure out what to do.
Is this possible?
I can only think of a way to do this by calling eval.
class Global(dict):
    def undefined(self, *args, **kargs):
        return u'ran undefined'

    def __getitem__(self, key):
        if key in self:
            return dict.__getitem__(self, key)
        return self.undefined

src = """
def foo():
    return u'ran foo'
print foo()
print callme(1,2)
"""

code = compile(src, '<no file>', 'exec')
globals = Global()
eval(code, globals)
The above outputs:
ran foo
ran undefined
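The snippet above is Python 2. A rough Python 3 adaptation of the same idea (mine, not from the original answer) has to forward real builtins explicitly, because in Python 3 even print is looked up as a name and would otherwise be swallowed by the fallback; it also relies on CPython consulting the mapping's __missing__ when globals is a dict subclass:

import builtins

class Global(dict):
    def undefined(self, *args, **kwargs):
        return 'ran undefined'

    def __missing__(self, key):
        # Called for absent keys; let genuine builtins through first.
        if hasattr(builtins, key):
            return getattr(builtins, key)
        return self.undefined

src = """
def foo():
    return 'ran foo'
print(foo())
print(callme(1, 2))
"""

exec(compile(src, '<no file>', 'exec'), Global())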
You haven't said why you're trying to do this. I had a use case where I wanted to handle typos I made during interactive Python sessions, so I put this into my Python startup file:
import sys
import re

def nameErrorHandler(type, value, traceback):
    if not isinstance(value, NameError):
        # Let the normal error handler handle this:
        return nameErrorHandler.originalExceptHookFunction(type, value, traceback)
    name = re.search(r"'(\S+)'", value.message).group(1)
    # At this point we know that there was an attempt to use name,
    # which ended up not being defined anywhere.
    # Handle this however you want...

nameErrorHandler.originalExceptHookFunction = sys.excepthook
sys.excepthook = nameErrorHandler
Hopefully this is helpful for anyone in the future who wants a special error handler for undefined names. Whether it helps the OP is unknown, since they never actually told us what their intended use case was.