In Tornado you can override the write_error method of a request handler to create a custom error page. My application has many handlers, and I want to show a custom error page whenever a 500 is returned.
I thought of implementing this by creating a mixin class that all my handlers inherit.
I would like to ask if there is a better option, maybe a way to configure this at the application level?
My workaround is similar to what you're thinking of. I have a BaseHandler and all my handlers inherit from it:
class BaseHandler(tornado.web.RequestHandler):
    def write_error(self, status_code, **kwargs):
        """Do your thing"""
I'm doing it just like you mention. Create a class for each kind of error message and override write_error in it, like:
class BaseHandler(tornado.web.RequestHandler):
    def common_method(self, arg):
        pass

class SpecificErrorMessageHandler(tornado.web.RequestHandler):
    def write_error(self, status_code, **kwargs):
        if status_code == 404:
            self.response(status_code,
                          'Resource not found. Check the URL.')
        elif status_code == 405:
            self.response(status_code,
                          'Method not allowed in this resource.')
        else:
            self.response(status_code,
                          'Internal server error on specific module.')

class ResourceHandler(BaseHandler, SpecificErrorMessageHandler):
    def get(self):
        pass
The final class inherits the error handling from SpecificErrorMessageHandler and the shared helpers from BaseHandler.
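To share one custom error page across many handlers, a mixin works well; with Tornado, the mixin must come before tornado.web.RequestHandler in the bases list so the MRO picks up its write_error. A minimal sketch of that pattern, using a hypothetical stand-in base class so it runs without Tornado installed:

```python
class FakeRequestHandler:
    """Stand-in for tornado.web.RequestHandler (assumption for illustration)."""
    def __init__(self):
        self.body = None

    def finish(self, chunk):
        # Tornado's finish() ends the response; here we just record the body
        self.body = chunk


class CustomErrorMixin:
    """Any handler that lists this mixin first gets the custom error page."""
    def write_error(self, status_code, **kwargs):
        # A real handler would typically call self.render("error.html", ...)
        self.finish("<html><body>Error %d</body></html>" % status_code)


class MyHandler(CustomErrorMixin, FakeRequestHandler):
    pass


handler = MyHandler()
handler.write_error(500)
print(handler.body)  # <html><body>Error 500</body></html>
```

Because the mixin is listed first, its write_error shadows the base class's default implementation for every handler that includes it.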
I've got a Django app and a message queue, and I want to be able to switch between queue services easily (SQS or RabbitMQ, for example).
So I set up a BaseQueue "interface":
from abc import ABC, abstractmethod

class BaseQueue(ABC):
    @abstractmethod
    def send_message(self, queue_name, message, message_attributes=None):
        pass
And two concrete classes that inherit from BaseQueue:
class SqsQueue(BaseQueue):
    def send_message(self, queue_name, message, message_attributes=None):
        # code to send message to SQS
        ...

class RabbitMqQueue(BaseQueue):
    def send_message(self, queue_name, message, message_attributes=None):
        # code to send message to RabbitMQ
        ...
Then in settings.py I've got a value pointing to the implementation the app should use:
QUEUE_SERVICE_CLS = "queues.sqs_queue.SqsQueue"
Because it's a Django app it's in settings.py, but this value could be coming from anywhere. It just says where the class is.
Then I've got a QueueFactory whose job is to return the queue service to use:
from django.conf import settings
from django.utils.module_loading import import_string

class QueueFactory:
    @staticmethod
    def default():
        return import_string(settings.QUEUE_SERVICE_CLS)()
The factory imports the class and instantiates it.
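For context, Django's import_string helper does roughly the following; this is a simplified sketch of django.utils.module_loading.import_string, not the exact implementation:

```python
from importlib import import_module


def import_string(dotted_path):
    # Split "queues.sqs_queue.SqsQueue" into module path and attribute name
    module_path, class_name = dotted_path.rsplit(".", 1)
    module = import_module(module_path)
    return getattr(module, class_name)


# Works for any importable dotted path, e.g. a stdlib class:
cls = import_string("collections.OrderedDict")
print(cls.__name__)  # OrderedDict
```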
I would then use it like so:
QueueFactory.default().send_message(queue_name, message)
It works, but I was wondering if there's a more Pythonic way to do it? Like with some magic methods?
I'm having some trouble designing the exception classes for a Python web API. What I would like to do is have various exceptions set up with some default error codes/messages, but also allow the flexibility of creating a custom one.
Take the following code:
class APIException(Exception):
    def __init__(self):
        super().__init__(self.message)

    @property
    def message(self) -> str:
        raise NotImplementedError

    @property
    def code(self) -> int:
        raise NotImplementedError

    @property
    def response(self):
        return {"error": self.message}, self.code
class UnknownException(APIException):
    message = "An unknown error occurred."
    code = 500

class UnauthorizedException(APIException):
    message = "Unauthorized"
    code = 401
This allows me to do things like raise UnauthorizedException, which works fine.
However, what I would like to be able to do is to raise arbitrary API exceptions, like raise APIException("This is a custom error", 404). Raising a set exception with arguments and raising an APIException without arguments do not need to be supported; I will not be raising them like that.
It doesn't seem I can do this cleanly with the way I have designed the inheritance above. I have tried other various approaches but none seem to be as clean as the example above.
What would be the best way to go about doing this sort of thing?
Have your APIException constructor take arguments, and have the subclasses implement constructors that provide those arguments:
class APIException(Exception):
    def __init__(self, message, code):
        super().__init__(message)
        self.message = message
        self.code = code

    @property
    def response(self):
        return {"error": self.message}, self.code

class UnknownException(APIException):
    def __init__(self):
        super().__init__("An unknown error occurred.", 500)

class UnauthorizedException(APIException):
    def __init__(self):
        super().__init__("Unauthorized", 401)
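An alternative that keeps the declarative subclass style from the question is to fall back to class-level defaults when no arguments are passed; a sketch:

```python
class APIException(Exception):
    # Class-level defaults, overridden by subclasses or constructor args
    message = "An unknown error occurred."
    code = 500

    def __init__(self, message=None, code=None):
        # Only shadow the class attribute when an argument is supplied
        if message is not None:
            self.message = message
        if code is not None:
            self.code = code
        super().__init__(self.message)

    @property
    def response(self):
        return {"error": self.message}, self.code


class UnauthorizedException(APIException):
    message = "Unauthorized"
    code = 401
```

This supports both raise UnauthorizedException() and raise APIException("This is a custom error", 404) with one constructor.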
I have the following code:
class Messenger(object):
    def __init__(self):
        # Class Type of what messages will be created as.
        message_class = Message

    def publish(self, body):
        # Instantiate object of type stored in `message_class`
        message = message_class(body)
        message.publish()
I want to assert that the Message.publish() method is called. How do I achieve this?
I've already tried the following ways:
Assign message_class to Mock or Mock(). If I debug what message_class(body) returns, it is a Mock, but I don't seem to be able to get the instance and assert it (because the Mock I assign in my test is not the instance used, it is the Type).
Patch the Message class with a decorator. Whenever I do this it seems like it does not catch it. When I debug what message_class(body) returns, it's of Message type, not Mock.
Try to mock the __init__ method of message_class in hopes that I can set the instance that is returned whenever the code tries to instantiate the message. Does not work, throws errors because the __init__ method is not supposed to have a return value.
If you were storing the actual instance, I'd say you could do something like messenger.message.publish.assert_called_once, but since message_class is being stored, it makes it slightly trickier. Given that, you can pull the return_value from the mocked class and check the call that way. Here's how I did it:
Messenger. Note the slight modification to assign message_class to self. I'm assuming you meant to do that, otherwise it wouldn't work without some global funkiness:
'''messenger.py'''

class Message(object):
    def __init__(self, body):
        self.body = body

    def publish(self):
        print('message published: {}'.format(self.body))

class Messenger(object):
    def __init__(self):
        # Class Type of what messages will be created as.
        self.message_class = Message

    def publish(self, body):
        # Instantiate object of type stored in `message_class`
        message = self.message_class(body)
        message.publish()
Test:
'''test_messenger.py'''

from unittest import mock, TestCase
from messenger import Messenger

class TestMessenger(TestCase):
    @mock.patch('messenger.Message')
    def test_publish(self, mock_message):
        messenger = Messenger()
        messenger.publish('test body')
        # .return_value gives the mock instance, from there you can make your assertions
        mock_message.return_value.publish.assert_called_once()
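The key detail is that calling a mocked class returns its return_value, a single shared child mock, so the "instance" your code creates is always reachable from the test. A standalone illustration of that mechanic:

```python
from unittest import mock

mock_class = mock.Mock(name="Message")  # plays the role of the patched class

instance = mock_class("test body")      # what message_class(body) returns
instance.publish()

# Every call to the mock class returns the same object: its return_value
assert instance is mock_class.return_value

mock_class.assert_called_once_with("test body")       # constructor args
mock_class.return_value.publish.assert_called_once()  # method call on the instance
```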
I am trying to test the following method:
def my_method(self, request, context):
    context.set_details('Already exists')
    context.set_code(grpc.StatusCode.ALREADY_EXISTS)
To test it, I must pass in a request and a context (which is a grpc.ServicerContext object), like so:
import grpc

def test_my_method(self):
    request = {"something": "something-else"}
    context = grpc.ServicerContext()
    my_method(request, context)
    # Assert something here
The problem is, I get the following error when I run my tests:
TypeError: Can't instantiate abstract class ServicerContext with abstract methods add_callback, cancel, invocation_metadata, is_active, peer, send_initial_metadata, set_code, set_details, set_trailing_metadata, time_remaining
How can I get a grpc.ServicerContext object? If I can't, how do I test the method?
grpc.ServicerContext is an abstract class defined with the abc module. In your test you need to write your own concrete subclass of it and pass an instance of that to the method you are testing.
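A sketch of that idea, written here without the grpc import so it stands alone; in a real test the fake would subclass grpc.ServicerContext and also stub the remaining abstract methods (peer, cancel, invocation_metadata, and so on):

```python
class FakeServicerContext:
    """Records what the handler sets, so the test can assert on it afterwards."""
    def __init__(self):
        self.code = None
        self.details = None

    def set_code(self, code):
        self.code = code

    def set_details(self, details):
        self.details = details


# Hypothetical free function mirroring the method under test
def my_method(request, context):
    context.set_details('Already exists')
    context.set_code('ALREADY_EXISTS')  # stand-in for grpc.StatusCode.ALREADY_EXISTS


context = FakeServicerContext()
my_method({"something": "something-else"}, context)
assert context.details == 'Already exists'
```

Since the method only exercises set_code and set_details, the fake only needs to record those; the other abstract methods can raise or pass.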
I have a settings object that contains some generic settings. These settings will change for each user. I'm wondering what would be the best way to code this. My current method is this:
class Settings(object):
    def __init__(self, user=None):
        if user and not isinstance(user, users.User):
            raise TypeError('must be a User object')
        self.user = user

    @property
    def title(self):
        if self.user:
            return 'user setting'
        return 'generic setting'
Given that there will be a few methods in Settings, having to run that if statement each time kinda sucks.
I was considering having a UserSettings class that extends Settings to override the defaults and provide the user specific settings. Though, I've heard that overriding methods is bad OOP design. Which leads me to option 2...
I then thought of creating UserSettings but it won't extend Settings. It'll instead wrap it and I'll have something like:
class UserSettings(object):
    def __init__(self, user=None):
        if user and not isinstance(user, users.User):
            raise TypeError('must be a User object')
        self.user = user
        self.settings = Settings()

    @property
    def title(self):
        return 'user setting'
So I can then do:
print(user_settings.title)           # get the user title
print(user_settings.settings.title)  # get the generic title
How should I code this?
Overriding methods is not only not bad OOP design, it's the basis for subtype polymorphism, which is core to OOP, and a common way to get rid of the need for such conditional checks.
There are certainly times when you should prefer composition to inheritance, but it's not clear from your description that there's anything wrong with making this a subclass.
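A minimal sketch of the subclass approach, which removes the per-method if check entirely by choosing the class once up front (the settings_for factory is a hypothetical helper, not from the question):

```python
class Settings:
    @property
    def title(self):
        return 'generic setting'


class UserSettings(Settings):
    def __init__(self, user):
        self.user = user

    @property
    def title(self):
        # Overriding is ordinary subtype polymorphism, not bad design
        return 'user setting'


def settings_for(user=None):
    # Pick the class once, instead of branching inside every property
    return UserSettings(user) if user is not None else Settings()
```

Callers just ask for a Settings object and get the right behavior without ever checking self.user themselves.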