I've got a Django app and a message queue and I want to be able to switch between queue services easily (SQS or RabbitMQ for example).
So I set up a BaseQueue "interface":
from abc import ABC, abstractmethod

class BaseQueue(ABC):
    @abstractmethod
    def send_message(self, queue_name, message, message_attributes=None):
        pass
And two concrete classes that inherit from BaseQueue:
class SqsQueue(BaseQueue):
    def send_message(self, queue_name, message, message_attributes=None):
        # code to send message to SQS
        ...

class RabbitMqQueue(BaseQueue):
    def send_message(self, queue_name, message, message_attributes=None):
        # code to send message to RabbitMQ
        ...
Then in settings.py I've got a value pointing to the implementation the app should use:
QUEUE_SERVICE_CLS = "queues.sqs_queue.SqsQueue"
Because it's a Django app it's in settings.py, but this value could be coming from anywhere. It just says where the class is.
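For illustration only (my sketch, not part of the original setup), the same dotted path could just as easily come from an environment variable:

import os

# Fall back to the SQS implementation if the variable isn't set
QUEUE_SERVICE_CLS = os.environ.get("QUEUE_SERVICE_CLS", "queues.sqs_queue.SqsQueue")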
Then I've got a QueueFactory whose job is to return the queue service to use:
from django.conf import settings
from django.utils.module_loading import import_string

class QueueFactory:
    @staticmethod
    def default():
        return import_string(settings.QUEUE_SERVICE_CLS)()
The factory imports the class and instantiates it.
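For context, Django's import_string is essentially a thin wrapper around importlib; a rough sketch of the equivalent logic (my approximation, not Django's exact source):

from importlib import import_module

def import_string_sketch(dotted_path):
    # "queues.sqs_queue.SqsQueue" -> ("queues.sqs_queue", "SqsQueue")
    module_path, class_name = dotted_path.rsplit(".", 1)
    # Import the module and pull the class off it
    return getattr(import_module(module_path), class_name)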
I would then use it like so:
QueueFactory.default().send_message(queue_name, message)
It works, but I was wondering if there's a more Pythonic way to do it? Maybe with some magic methods?
I have the following code:
class Messenger(object):
    def __init__(self):
        # Class type that messages will be created as.
        message_class = Message

    def publish(self, body):
        # Instantiate an object of the type stored in `message_class`
        message = message_class(body)
        message.publish()
I want to assert that the Message.publish() method is called. How do I achieve this?
I've already tried the following ways:
Assigning message_class to Mock or Mock(). If I debug what message_class(body) returns, it is a Mock, but I don't seem to be able to get hold of the instance and assert on it (because the Mock I assign in my test is the type, not the instance that gets used).
Patching the Message class with a decorator. Whenever I do this it doesn't seem to catch it: when I debug what message_class(body) returns, it is of type Message, not Mock.
Mocking the __init__ method of message_class, in the hope that I can set the instance that is returned whenever the code instantiates a message. This doesn't work; it throws errors because __init__ is not supposed to have a return value.
If you were storing the actual instance, I'd say you could do something like messenger.message.publish.assert_called_once, but since the class itself is stored in message_class, it's slightly trickier. Given that, you can pull the return_value off the mocked class and check the call that way. Here's how I did it:
First, the Messenger module. Note the slight modification to assign message_class to self. I'm assuming you meant to do that, otherwise it wouldn't work without some global funkiness:
'''messenger.py'''

class Message(object):
    def __init__(self, body):
        self.body = body

    def publish(self):
        print('message published: {}'.format(self.body))

class Messenger(object):
    def __init__(self):
        # Class type that messages will be created as.
        self.message_class = Message

    def publish(self, body):
        # Instantiate an object of the type stored in `message_class`
        message = self.message_class(body)
        message.publish()
Test:
'''test_messenger.py'''

from unittest import mock, TestCase

from messenger import Messenger

class TestMessenger(TestCase):
    @mock.patch('messenger.Message')
    def test_publish(self, mock_message):
        messenger = Messenger()
        messenger.publish('test body')
        # .return_value gives the mock instance; from there you can make your assertions
        mock_message.return_value.publish.assert_called_once()
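If you also want to verify what the message was constructed with, the class mock records the constructor call too (both assertions below are standard unittest.mock API):

# The class mock saw Messenger.publish call Message('test body')...
mock_message.assert_called_once_with('test body')
# ...and the instance mock saw .publish() called with no arguments
mock_message.return_value.publish.assert_called_once_with()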
I'm trying to follow the Celery Based Background Tasks guide to create the Celery settings for a simple application.
In my task.py:
from celery import Celery

def make_celery(app):
    celery = Celery(app.import_name, backend=app.config['CELERY_RESULT_BACKEND'],
                    broker=app.config['CELERY_BROKER_URL'])
    celery.conf.update(app.config)
    TaskBase = celery.Task

    class ContextTask(TaskBase):
        abstract = True

        def __call__(self, *args, **kwargs):
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)

    celery.Task = ContextTask
    return celery
This works in the app.py of the main Flask application:
from flask import Flask

flask_app = Flask(__name__)
flask_app.config.update(
    CELERY_BROKER_URL='redis://localhost:6379',
    CELERY_RESULT_BACKEND='redis://localhost:6379'
)
celery = make_celery(flask_app)

@celery.task()
def add_together(a, b):
    return a + b
My use case is that I want to create another module, helpers.py, where I can define a collection of asynchronous classes, to separate the Celery-based methods and keep them modular.
So I imported make_celery from the task.py module into another module, helpers.py, in order to create a class AsyncMail that handles email work in the background:
from task import make_celery

class AsyncMail(object):
    def __init__(self, app):
        """
        :param app: An instance of a flask application.
        """
        self.celery = make_celery(app)

    def send(self, msg):
        print(msg)
Now, how can I use the self.celery attribute as a decorator for any method of the class?
@celery.task()
def send(self, msg):
    print(msg)
If it's impossible, what alternatives are there to achieve this?
You can't do what you're trying to do. At the time the class is being defined, there is no self, much less self.celery, to call, so you can't use @self.celery. Even if you had some kind of time machine, there could be 38 different AsyncMail instances created, and which one's self.celery would you want here?
Before getting into how you could do what you want, are you sure you want to? Do you actually want each AsyncMail object to have it own separate Celery? Normally you only have one per app, which is why normally this doesn't come up.
If you really wanted to, you could give each instance decorated methods after you have an object to decorate them with. But it's going to be ugly.
def __init__(self, app):
    self.celery = make_celery(app)
    # We need to get the function off the class, not the bound method off self
    send = type(self).send
    # Then we decorate it manually; this is all @self.celery.task does
    send = self.celery.task(send)
    # Then we manually bind it as a method
    send = send.__get__(self)
    # And now we can store it as an instance attribute, shadowing the class's
    self.send = send
Or, if you prefer to put it all together in one line:
self.send = self.celery.task(type(self).send).__get__(self)
For Python 2, the "function off the class" is actually an unbound method, and IIRC you have to call __get__(self, type(self)) to turn it into a bound method at the end, but otherwise it should all be the same.
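For completeness, the usual layout sidesteps all of this: create one module-level Celery app, decorate plain functions with it, and have the class simply call them. A minimal sketch reusing the question's own names (the task name send_mail_task is mine):

# helpers.py - one app for the whole project, created once at import time
from task import make_celery
from app import flask_app

celery = make_celery(flask_app)

@celery.task()
def send_mail_task(msg):
    print(msg)

class AsyncMail(object):
    def send(self, msg):
        # .delay() is Celery's shortcut for apply_async()
        send_mail_task.delay(msg)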
I've got flask running in a class
from flask import Flask

class Server:
    app = Flask(__name__)

    def __init__(self, object_with_data):
        self.object = object_with_data

    @app.route("/")
    def hello(self):
        return self.object.get_data()

    def run(self):
        self.app.run(host='0.0.0.0')
but I get an error saying
TypeError: hello() missing 1 required positional argument: 'self'.
How do I get access to an instance of the object when running flask within a class?
One dirty solution I thought of was using sockets to create a link between my server class and the object_with_data class, but I'm sure someone will know a better way to do this.
I read something about event handlers, but I don't fully understand that.
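One common way around this (a sketch of my own, not from the thread) is to skip the decorator and register bound methods with Flask's add_url_rule, so every view has access to self:

from flask import Flask

class Server:
    def __init__(self, object_with_data):
        self.app = Flask(__name__)
        self.object = object_with_data
        # add_url_rule is the non-decorator form of @app.route;
        # the bound method self.hello carries the instance with it
        self.app.add_url_rule("/", "hello", self.hello)

    def hello(self):
        return self.object.get_data()

    def run(self):
        self.app.run(host='0.0.0.0')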
I have a tasks.py that contains a subclass of Task.
According to the docs, the base class is instantiated only once per task.
But this only seems to hold within the same task; calling a different task creates a new instance. So I can't access, via get_sessions, sessions that were created with create_session. How can I have a single instance that is shared between different tasks?
class AuthentificationTask(Task):
    connections = {}

    def login(self, user, password, server):
        if user not in self.connections:
            self.connections = {user: ServerConnection(verbose=True)}
        # from celery.contrib import rdb
        # rdb.set_trace()
        self.connections[user].login(user=user, password=password, server=server)

@task(bind=True, max_retries=1, queue='test', base=AuthentificationTask)
def create_session(self, user, password, server):
    self.login(user, password, server)

@task(bind=True, max_retries=1, queue='test', base=AuthentificationTask)
def get_sessions(self, user, password, server):
    return self.connections[user].sessions
Set the task_cls arg for your Celery application like this:
import celery
from celery import Task
from celery.utils.log import get_task_logger

logger = get_task_logger(__name__)

class AuthentificationTask(Task):
    def example(self):
        logger.info('AuthentificationTask.example() method was called')

app = celery.Celery(
    __name__,
    broker='redis://localhost:6379/0',
    task_cls=AuthentificationTask,
    # other args
)

@app.task(bind=True)
def test_my_task(self):
    # call AuthentificationTask.example
    self.example()
In this case your custom class will be used as the default base for all tasks.
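A quick illustrative call (assuming a worker is running):

# Bound tasks now reach AuthentificationTask's methods through self;
# .delay() is Celery's standard shortcut for apply_async()
result = test_my_task.delay()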
It seems this was an issue on my side, caused by reinitialising self.connections each time:
self.connections = {user: ServerConnection(verbose=True)}
In further tests, base was instantiated only once for all (different) tasks. Thanks @Danila Ganchar for suggesting an alternative approach. I will give it a try!
You're on the right track by making connections a class variable on AuthentificationTask. That makes it available on the class itself (i.e. as AuthentificationTask.connections). The catch is the assignment in login: writing self.connections = {...} creates an instance attribute that shadows the class variable of the same name. For the desired behavior, assign into the shared dict instead (e.g. AuthentificationTask.connections[user] = ...), or reference AuthentificationTask.connections explicitly in both login and get_sessions.
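A tiny standalone illustration of that shadowing (my example, outside Celery):

class Foo:
    shared = {}

a, b = Foo(), Foo()
a.shared = {'x': 1}   # assignment creates an instance attribute on a only
print(Foo.shared)     # {} - the class attribute is untouched
print(b.shared)       # {} - b still resolves to the class attribute
Foo.shared['y'] = 2   # mutating the class attribute in place...
print(b.shared)       # {'y': 2} - ...is visible from every instance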
In Tornado there is an option to override the write_error function of a request handler to create your own custom error page.
In my application there are many handlers, and I want to show a custom error page when I get a 500.
I thought of implementing this by creating a mixin class that all my handlers inherit from.
Is there a better option, maybe a way to configure the application?
My workaround looks quite similar to what you're thinking of. I have a BaseHandler, and all my handlers inherit from it:
import tornado.web

class BaseHandler(tornado.web.RequestHandler):
    def write_error(self, status_code, **kwargs):
        """Do your thing"""
I do it just like you mention: create a class for each kind of error message and override write_error, like so:
class BaseHandler(tornado.web.RequestHandler):
    def common_method(self, arg):
        pass

class SpecificErrorMessageHandler(tornado.web.RequestHandler):
    def write_error(self, status_code, **kwargs):
        if status_code == 404:
            self.response(status_code,
                          'Resource not found. Check the URL.')
        elif status_code == 405:
            self.response(status_code,
                          'Method not allowed in this resource.')
        else:
            self.response(status_code,
                          'Internal server error on specific module.')

class ResourceHandler(BaseHandler, SpecificErrorMessageHandler):
    def get(self):
        pass
The final class will inherit only the error handling specified.
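One caveat: self.response(...) above isn't a built-in RequestHandler method, so presumably it's a custom helper on the answerer's classes. A minimal sketch of such a helper using real Tornado APIs:

import tornado.web

class ResponseMixin(tornado.web.RequestHandler):
    def response(self, status_code, message):
        # set_status and finish are standard RequestHandler methods
        self.set_status(status_code)
        self.finish({'status': status_code, 'error': message})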