Using wxPython in an MVC design, I looked for a way to let the models tell the controllers about changes. I found (py)pubsub, which implements a global notification mechanism: messages are sent to one place (the pubsub Publisher), which forwards them to all subscribers. Each subscriber checks whether the message is interesting and does what is needed.
From my Smalltalk days, I know a more "local" approach: each model object keeps a list of interested controllers and sends change notifications only to those. No global publisher is involved. This can be implemented as part of the Model class and works in much the same way, except that it stays local to the model and the controller.
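To make it concrete, here is a minimal sketch of what I mean by the local approach (just an illustration of mine, not from any existing package):

class Observable(object):
    """Each model keeps its own list of interested callbacks; no global publisher."""
    def __init__(self):
        self._listeners = []

    def add_listener(self, callback):
        self._listeners.append(callback)

    def remove_listener(self, callback):
        self._listeners.remove(callback)

    def notify(self, **changes):
        # Only the controllers registered with *this* model instance are called.
        for callback in list(self._listeners):
            callback(self, **changes)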
Now, is there a reason to prefer the global approach (which seems less performant to me, and prone to the usual problems of global state)? Is there another package that implements a local observer?
Thanks!
I'm not really seeing the subtle difference here. As far as I know, pubsub is the way to go. It's included in wxPython as wx.lib.pubsub, or you can download it from http://pubsub.sourceforge.net/. You can put the listeners just in the models and the publisher(s) just in the controller, or however you need to. Here are a couple of links to get you started:
http://www.blog.pythonlibrary.org/2010/06/27/wxpython-and-pubsub-a-simple-tutorial/
http://wiki.wxpython.org/WxLibPubSub
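For completeness, a minimal usage sketch; the topic name and handler are made up, and the exact import and argument protocol depend on your pubsub/wxPython version:

from pubsub import pub  # or: from wx.lib.pubsub import pub, depending on version

def on_model_changed(value):
    # controller-side handler for the "model.changed" topic
    print("model changed:", value)

pub.subscribe(on_model_changed, "model.changed")   # controller subscribes
pub.sendMessage("model.changed", value=42)         # model publishes without knowing who listens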
I've been playing around for a while with MVC in wxPython, and I know what you mean about pubsub being global.
The latest idea I've come up with is that each view and model has its own observer.
The observers hold weak references to their handlers, and everything runs in a separate thread so as not to block the GUI. To call back into the GUI thread I'm using the wxAnyThread Gui method decorator.
There are three types of signal that get sent. For the model, you can set which attributes are observed; they automatically send out a signal when they are changed. Then, on both the model and the view, you can send a message signal or a keyword signal. Each of the three signal types has to be unique per view or model, as they are used to build a tuple that identifies them.
Model attributes
Controller handlers are decorated with:
@onAttr('attributeName')
def onModelAttributeName(self, attributeName):
    ...
When you bind a handler to an attribute, it is immediately called with the attribute's current value and then continues to receive changes.
Sending messages
Use the method:
view/model.notify('Your message')
The controller callback is decorated with:
@onNotify('Your message')
def onYourMessage(self):
    ...
Sending keywords
Use the method:
view/model.notifyKw(valid=True, value='this')
The controller callback is decorated with:
@onNotifyKw('valid', 'value')
def onValidValueKw(self, valid, value):
    ...
The GUI knows nothing about the models. The only thing you add to the GUI is the view's signaler; the controller attaches itself to this, so if you don't add a controller, the view just happily fires off messages to no one.
I've uploaded what I have so far on GitHub:
https://github.com/Yoriz/Y_Signal
https://github.com/Yoriz/Y_Mvc
Both have unit tests, which should give a bit of an example of what it does, but I will create some wxPython examples.
I'm using Python 2.7, and the Ysignals module requires
https://pypi.python.org/pypi/futures/2.1.3 for the threading.
Please take a look; I'd be interested in what someone else thinks of this way of approaching MVC, or in having anything I seriously overlooked pointed out.
I want to make changes in a model instance A when a second model instance B is saved, updated, or deleted. All models are in the same Django app.
What would be the optimal way to do it?
Should I use signals?
Override the default methods (save, update, delete)?
Something else?
The Django documentation warns:
Where possible you should opt for directly calling the handling code, rather than dispatching via a signal.
Can somebody elaborate on that statement?
The performance impact of your signal handlers depends, of course, on what they do. But you should be aware that they are executed synchronously when the signal is fired, so if there is a lot going on in the handlers (for example, many database calls), execution of the code that triggered the signal will be delayed.
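For example, a signal-based handler might look roughly like this (model and field names are made up); the body runs synchronously inside B's save() call, which is exactly why the docs suggest calling the handling code directly when you can:

from django.db.models.signals import post_save
from django.dispatch import receiver

from myapp.models import ModelA, ModelB  # hypothetical models

@receiver(post_save, sender=ModelB)
def update_a_when_b_saved(sender, instance, created, **kwargs):
    # Runs in the same thread/request as instance.save(); slow work here
    # (extra queries, mails, ...) delays whatever triggered the save.
    ModelA.objects.filter(b=instance).update(needs_review=True)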
As a self-learner, I've found Kivy's binding behaviour (mainly in Python) a bit confusing. In many Kivy sources and examples I've seen that some properties are bound to a callback in the constructor, while others are not (though they are used in those callbacks). My queries are:
Do I need to bind every custom (created by me) or Kivy-provided property to a callback whenever it needs to be kept up to date?
In either case, which is the better option: bind or fbind? (See the sketch after the assumptions below.)
Does scheduling a callback have any advantages (particularly during widget creation)?
My corresponding assumptions are:
Only bind those properties that are inter-related or inter-dependent, along with those used for canvas drawing; the rest will be managed and updated internally by Kivy.
Both end up doing almost the same thing (apart from the implicit vs. explicit naming of the bound property).
As per the Kivy docs, widget creation is relatively slow, which is why scheduling is necessary.
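To make the second query concrete, this is the kind of binding I mean (a small sketch of my own, not taken from any particular example):

from kivy.event import EventDispatcher
from kivy.properties import NumericProperty

class Model(EventDispatcher):
    value = NumericProperty(0)

def on_value(instance, new_value):
    print("value is now", new_value)

m = Model()
m.bind(value=on_value)              # implicit: bound via the keyword / property name
uid = m.fbind('value', on_value)    # explicit name, returns a uid for later unbinding
m.value = 5                         # both callbacks fire
m.unbind_uid('value', uid)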
(This is a re-post of a topic that I posted earlier with the same title but got no response from the community.)
I have not found any descriptive, informative article on this so far.
I need your proper guidance, thank you.
How does one properly structure a larger django website such as to retain testability and maintainability?
In the best django spirit (I hope) we started out by not caring too much about decoupling between different parts of our website. We did separate it into different apps, but those depend rather directly upon each other, through common use of model classes and direct method calls.
This is getting quite entangled. For example, one of our actions/services looks like this:
def do_apply_for_flat(user, flat, bid_amount):
    assert can_apply(user, flat)
    application = Application.objects.create(
        user=user, flat=flat, amount=bid_amount,
        status=Application.STATUS_ACTIVE)
    events.logger.application_added(application)
    mails.send_applicant_application_added(application)
    mails.send_lessor_application_received(application)
    return application
The function doesn't just perform the actual business process; it also handles event logging and sends mails to the involved users. I don't think there's anything inherently wrong with this approach. Yet it's getting more and more difficult to properly reason about the code, and even to test the application, as it becomes harder to separate the parts intellectually and programmatically.
So, my question is, how do the big boys structure their applications such that:
Different parts of the application can be tested in isolation
Testing stays fast by only enabling parts that you really need for a specific test
Code coupling is reduced
My take on the problem would be to introduce a centralized signal hub (just a bunch of Django signals in a single Python file) to which the individual Django apps may publish or subscribe. The above example function would publish an application_added event, which the mails and events apps would listen to. Then, for efficient testing, I would disconnect the parts I don't need. This also increases decoupling considerably, as the services don't need to know about sending mails at all.
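Roughly, I imagine something like this (just a sketch; the module layout and the application_added wiring are my own invention):

# hub.py - the single place where cross-app signals live
import django.dispatch

application_added = django.dispatch.Signal()

# flats/services.py - the service only announces what happened
from hub import application_added

def do_apply_for_flat(user, flat, bid_amount):
    assert can_apply(user, flat)
    application = Application.objects.create(
        user=user, flat=flat, amount=bid_amount,
        status=Application.STATUS_ACTIVE)
    application_added.send(sender=None, application=application)
    return application

# mails/handlers.py - connected in production, disconnected in most tests
from hub import application_added

def on_application_added(sender, application, **kwargs):
    send_applicant_application_added(application)
    send_lessor_application_received(application)

application_added.connect(on_application_added)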
But I'm unsure, and thus very interested in what the accepted practice is for this kind of problem.
For testing, you should mock your dependencies. The logging and mailing components, for example, should be mocked during unit testing of the views. I would usually use python-mock; this allows your views to be tested independently of the logging and mailing components, and vice versa. Just assert that your views are making the right service calls, and mock the return value/side effect of those calls.
You should also avoid touching the database in tests. Instead, try to use in-memory objects as much as possible: rather than calling Application.objects.create(), defer the save() to the caller, so that you can test the services without actually having the Application in the database. Alternatively, patch out the save() method so it won't actually save, but that's much more tedious.
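A sketch of what that could look like with mock; the module path flats.services and the patched names are assumptions based on the question's example:

from unittest import mock  # Python 3; on 2.7 use the standalone "mock" package

from flats.services import do_apply_for_flat   # hypothetical module path

@mock.patch("flats.services.mails")
@mock.patch("flats.services.events")
@mock.patch("flats.services.Application")
@mock.patch("flats.services.can_apply", return_value=True)
def test_do_apply_for_flat(mock_can_apply, mock_application, mock_events, mock_mails):
    # No database: the model manager and the precondition check are patched out.
    user, flat = mock.Mock(), mock.Mock()
    application = do_apply_for_flat(user, flat, bid_amount=100)
    assert application is mock_application.objects.create.return_value
    mock_events.logger.application_added.assert_called_once_with(application)
    mock_mails.send_applicant_application_added.assert_called_once_with(application)
    mock_mails.send_lessor_application_received.assert_called_once_with(application)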
Transfer some parts of your app to different microservices. This will make some parts of your app focused on doing one or two things right (e.g. event logging, emails). Code coupling is also reduced and different parts of the site can be tested in isolation as well.
The microservice architecture style involves developing a single application as a collection of smaller services that communicate, usually via an API.
You might need to use a smaller framework like Flask.
Resources:
For more information on microservices click here:
http://martinfowler.com/articles/microservices.html
http://aurelavramescu.blogspot.com/2014/06/user-microservice-python-way.html
First, try to break your big task down into smaller classes. Connect them with ordinary method calls or Django signals.
If you feel that the sub-tasks are independent enough, you can implement them as several Django applications in the same project. See the Django tutorial, which describes the relation between applications and projects.
I am dealing with a python application that consists of multiple distributed lightweight components that communicate using RabbitMQ & Kombu.
A component listens on two queues and can receive multiple message types on each queue. Subclasses can override how each message type is processed by registering custom handlers.
All this works fine.
I now have the added requirement that each component must have a basic REST/HTML interface. The idea is that you point your browser at the running component and get real-time information on what it is currently doing (what messages it is processing, CPU usage, state info, log, etc.).
It needs to be lightweight, so after some research I have settled on Flask (but am open to suggestions). In pseudocode this means taking:
class Component:
    # Queue A
    # Queue B
    ...

    def setup(self, ...):
        # connect to the broker & other initialization

    def start(self, ...):
        # start the event loop and wait for work

    def handle_msg_on_A(self, msg):
        # dispatch a msg to a handler depending on the msg type

    def handle_msg_on_B(self, msg):
        ...
    ...
and adding a number of view methods:
@app.route('/')
def web_ui(self):
    # render to a template

@app.route('/state')
def get_state(self):
    # REST method to return some internal state info as JSON

...
However, bolting a web UI onto a class like this breaks SOLID principles and brings problems with inheritance (a subclass may want to display more/less information). Decorators are not inherited so every view method would need to be explicitly overridden and redecorated. Maybe using a mixin + reflection could work somehow but it feels hackish.
Instead, using composition could work: put the web stuff in a separate class that delegates the url routes to a fixed, predefined set of polymorphic methods on the nested component.
This way components remain unaware of Flask at the cost of some loss in flexibility (the set of available methods is fixed).
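A rough sketch of that composition idea (names like get_state and name are placeholders for the fixed set of methods the component would have to expose):

from flask import Flask, jsonify

class ComponentWebUI(object):
    """Wraps a component; the component itself never imports Flask."""
    def __init__(self, component):
        self.component = component
        self.app = Flask(__name__)
        self.app.add_url_rule("/", "index", self.index)
        self.app.add_url_rule("/state", "state", self.state)

    def index(self):
        return "<h1>%s</h1>" % self.component.name   # or render a template

    def state(self):
        # Delegates to one of the fixed, polymorphic methods on the component
        # (assumed here to return a JSON-serializable dict).
        return jsonify(self.component.get_state())

    def run(self, **kwargs):
        self.app.run(**kwargs)

# usage: ComponentWebUI(my_component).run(port=5000)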
I have now discovered Flask blueprints and Application Dispatching and it looks like they could bring a better, more extensible solution. However, I have yet to wrap my head around them.
I feel like I am missing a design pattern here and hopefully somebody with more flask-fu or experience with this type of problem can comment.
Something else was quietly introduced in Flask 0.7 that might be of interest to you - Pluggable Views. These are class based rather than function based endpoints - so you can use the dispatch_request method to manage your state transitions (only overriding it when needed).
The benefit of doing it this way, as opposed to using Application Dispatching, is that you get url_for support all across your application (as opposed to having to hard code in URLs that cross application boundaries.) You'll have to decide if this is something that is likely to be an issue for your application.
In pseudo-code:
# File: Components.py
from flask.views import View

class Component(View):
    # Define your Component-specific application logic here

    def dispatch_request(self, *url_args, **url_kwargs):
        # Define route-specific logic that all Components should have here.
        # Call Component-specific methods as necessary.
        pass

class Tool_1(Component):
    pass

class Tool_2(Component):
    # Override methods here
    pass

# File: app.py
from flask import Flask
from Components import Tool_1, Tool_2

app = Flask(__name__)

# Assuming you want to pass all additional parameters as one argument
app.add_url_rule("/tool_1/<path:options>", "tool1", view_func=Tool_1.as_view("tool1"))

# Assuming you want to pass additional parameters separately
tool_2_view = Tool_2.as_view("tool2")
app.add_url_rule("/tool_2/", "tool2", view_func=tool_2_view)
app.add_url_rule("/tool_2/<option>", "tool2", view_func=tool_2_view)
app.add_url_rule("/tool_2/<option>/<filter>", "tool2", view_func=tool_2_view)
You can add blueprints to the mix if you have a series of components that are all logically connected together and you don't want to have to remember to put /prefix in front of each one's add_url_rule call. But if you just have a series of components that are mostly independent of each other, this is the pattern I'd use*.
*. On the other hand, if they need to be isolated from each other I'd use the Application Dispatch pattern recommended in the docs.
I'm working on optimizing my design in terms of MVC, intent on simplifying the API of the view, which is quite nested even though I've built composite widgets (with their own events and/or pubsub messages) in an attempt to simplify things.
For example, I have a main top-level GUI class, a wxFrame, which has a number of widgets including a notebook; the notebook contains a number of tabs, some of which are notebooks that contain composite widgets. So to call the methods of one of these composite widgets from the controller, I would have
self.gui.nb.sub_nb.composite_widget.method()
To create a suitable abstraction for the view, I have created references to these widgets (whose methods need to be called from the controller) in the view, like so:
self.composite_widget = self.nb.sub_nb.composite_widget
so that in the controller the call is now simplified to
self.gui.composite_widget.method()
Is this an acceptable way to create an abstraction layer for the gui?
Well, that's definitely one way to handle the issue. I tend to use pubsub to call methods the old-fashioned way, though. Some people like PyDispatcher better than pubsub. The main problem with multi-dot method calling is that it's hard to debug when you have to change a method name.
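For example, instead of reaching through self.gui.nb.sub_nb.composite_widget, the composite widget can subscribe itself to a topic and the controller just publishes (a sketch; the topic name and widget class are made up):

import wx
from pubsub import pub  # or wx.lib.pubsub, depending on wxPython version

class CompositeWidget(wx.Panel):
    def __init__(self, parent):
        super(CompositeWidget, self).__init__(parent)
        pub.subscribe(self.on_update, "composite.update")

    def on_update(self, value):
        pass  # update the display from the published value

# controller: no knowledge of where the widget lives in the notebook hierarchy
pub.sendMessage("composite.update", value=42)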