I've spent quite some time building complex apps in Dash. I have interfaces for "custom components", wrappers for id-property pairs, a way to keep track of my component ids, and means to re-use the components I create.
I'm using several of the transforms provided by dash-extensions, without which some of my solutions would not work.
I mirrored custom types from my program logic by providing converter classes, which convert custom-type objects (e.g. Measurement) to and from dictionary representations that I can then pass through Dash callbacks.
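For illustration, a minimal converter in the style I mean (the Measurement fields here are made up):

from dataclasses import dataclass, asdict

@dataclass
class Measurement:
    name: str
    value: float
    unit: str

class MeasurementConverter:
    """Converts Measurement to/from the plain dicts that pass through callbacks."""

    @staticmethod
    def to_dict(measurement: Measurement) -> dict:
        return asdict(measurement)

    @staticmethod
    def from_dict(data: dict) -> Measurement:
        return Measurement(**data)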
Now, in a couple of situations we are actually using Dash as a "desktop app": it is not run on a remote server but on localhost, and there is only ever one user. Thus, the "no globals" restriction is void, isn't it? I could safely rely on global constants in my Dash app if I only run it on localhost, correct?
Moreover, I create my custom components inside a class, which also has a method registerCallbacks() that creates all the callbacks the component needs. These callbacks could then also rely on class members, correct? That way I could keep the converter functions as members and would not need to pass my "objects" via Inputs/Outputs.
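A stripped-down sketch of that pattern (the component, ids, and members are made up for illustration):

from dash import Dash, Input, Output, html, dcc

class MeasurementPanel:
    def __init__(self, app: Dash, uid: str, converter):
        self.app = app
        self.uid = uid
        self.converter = converter  # kept as a member, not passed through a Store
        self.layout = html.Div([
            dcc.Input(id=f"{uid}-input"),
            html.Div(id=f"{uid}-output"),
        ])

    def registerCallbacks(self):
        @self.app.callback(
            Output(f"{self.uid}-output", "children"),
            Input(f"{self.uid}-input", "value"),
        )
        def update(value):
            # The callback closes over self, so class members are available.
            return f"{value} (converted by {type(self.converter).__name__})"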
Are there any problems with this idea?
I am writing a program that interacts with many external services, such as Gmail and Discord, through their respective SDKs. The problem I am running into is that the program makes a lot of network calls and runs expensive computations, which I would rather avoid in development by using stub objects. The SDKs I am using expose their functionality through standard Python classes with type hints. At the moment I am creating stubs for them manually, but that will not be feasible in the long run.
Here is a simplified example to illustrate what I am trying to achieve:
from dataclasses import dataclass

@dataclass
class EmailReceipt:
    receiver_email_address: str
    email_text: str
    ...

class GmailService:
    ...
    def send_email(self, receiver_email: str) -> EmailReceipt:
        """A network call is made here to send the email."""
    # More methods follow
    ...

class GmailServiceStub:
    def send_email(self, receiver_email: str) -> EmailReceipt:
        """Instantiates a random EmailReceipt object and returns it."""
    # More stub methods follow
In development I would like to avoid making requests to the mail server, so I am creating a stub class. The codebase uses dependency injection throughout, so it is trivial to swap in different versions of the GmailService class. I am using stubbed versions of external services for rapid development, but I think they could also be used for testing.
All I am doing here is implementing the contract that the send_email method returns an instance of EmailReceipt, disregarding any domain logic, so that it can be used downstream by other classes. At the moment it is just 2 services with 10 methods in total, but it is growing, and I would rather have a tool or a library generate the stubs for me.
So I am wondering if there is a tool or a library that could do this, or something close to it, ideally with this interface:
mocked_service = mocker.mock_class(service)
# All methods of mocked_service return appropriate objects/types which can be used downstream.
If it is not possible in Python, are there other programming languages where this would be possible?
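For what it's worth, here is a rough sketch of what I imagine such a helper could do internally using only the standard library (mock_class here is hypothetical, not an existing API):

import dataclasses
import inspect
import typing
from unittest.mock import create_autospec

def mock_class(cls):
    # create_autospec gives a signature-checked mock of the whole class.
    stub = create_autospec(cls, instance=True)
    for name, method in inspect.getmembers(cls, inspect.isfunction):
        return_type = typing.get_type_hints(method).get("return")
        if return_type is not None and dataclasses.is_dataclass(return_type):
            # Zero-fill every field, assuming each field type (str, int, ...)
            # can be constructed without arguments.
            dummy = return_type(
                **{f.name: f.type() for f in dataclasses.fields(return_type)}
            )
            getattr(stub, name).return_value = dummy
    return stub

mocked_service = mock_class(GmailService)
receipt = mocked_service.send_email("a@b.com")  # returns a dummy EmailReceipt

(Something like hypothesis' st.from_type might also help with generating the random return objects.)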
How does one properly structure a larger Django website so as to retain testability and maintainability?
In the best Django spirit (I hope) we started out by not caring too much about decoupling between the different parts of our website. We did separate it into different apps, but those depend rather directly on each other, through shared model classes and direct method calls.
This is getting quite entangled. For example, one of our actions/services looks like this:
def do_apply_for_flat(user, flat, bid_amount):
    assert can_apply(user, flat)
    application = Application.objects.create(
        user=user, flat=flat, amount=bid_amount,
        status=Application.STATUS_ACTIVE)
    events.logger.application_added(application)
    mails.send_applicant_application_added(application)
    mails.send_lessor_application_received(application)
    return application
The function not only performs the actual business process; it also handles event logging and sends mails to the involved users. I don't think there's anything inherently wrong with this approach. Yet it's getting more and more difficult to properly reason about the code, and even to test the application, as it becomes harder to separate the parts intellectually and programmatically.
So, my question is: how do the big boys structure their applications so that:
Different parts of the application can be tested in isolation
Testing stays fast by only enabling parts that you really need for a specific test
Code coupling is reduced
My take on the problem would be to introduce a centralized signal hub (just a bunch of Django signals in a single Python file) which the individual Django apps may publish to or subscribe to. The example function above would publish an application_added event, which the mails and events apps would listen for. Then, for efficient testing, I would disconnect the parts I don't need. This also increases decoupling considerably, as services don't need to know about sending mails at all.
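Roughly, the hub idea in code (module layout and names are just illustrative):

# hub.py - the centralized signal hub: one Signal per domain event
from django.dispatch import Signal

application_added = Signal()  # sent with kwarg: application

# services.py - the business process publishes, knowing nothing of mails
from hub import application_added

def do_apply_for_flat(user, flat, bid_amount):
    assert can_apply(user, flat)
    application = Application.objects.create(
        user=user, flat=flat, amount=bid_amount,
        status=Application.STATUS_ACTIVE)
    application_added.send(sender=do_apply_for_flat, application=application)
    return application

# mails.py - subscribes to the hub; easy to disconnect in tests
from hub import application_added

def on_application_added(sender, application, **kwargs):
    send_applicant_application_added(application)
    send_lessor_application_received(application)

application_added.connect(on_application_added)

In tests, application_added.disconnect(on_application_added) would then switch the mail part off.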
But I'm unsure, and thus very interested in what the accepted practice for these kinds of problems is.
For testing, you should mock your dependencies. The logging and mailing components, for example, should be mocked during unit testing of the views. I would usually use python-mock; this allows your views to be tested independently of the logging and mailing components, and vice versa. Just assert that your views make the right service calls, and mock the return value/side effect of each call.
You should also avoid touching the database in tests. Try to use in-memory objects as much as possible: instead of calling Application.objects.create(), defer the save() to the caller, so that you can test the services without actually having an Application in the database. Alternatively, patch out the save() method so it won't actually save, but that is much more tedious.
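For example, a minimal sketch with unittest.mock (the successor to python-mock); the module paths here are assumptions based on the example above:

from unittest import mock
from myapp.services import do_apply_for_flat

@mock.patch("myapp.services.mails")
@mock.patch("myapp.services.events")
@mock.patch("myapp.services.Application")  # no database: create() is mocked away
@mock.patch("myapp.services.can_apply", return_value=True)
def test_do_apply_for_flat(mock_can_apply, mock_application, mock_events, mock_mails):
    user, flat = object(), object()  # plain in-memory stand-ins
    application = do_apply_for_flat(user, flat, bid_amount=100)
    # Assert the service made exactly the calls we expect.
    mock_mails.send_applicant_application_added.assert_called_once_with(application)
    mock_mails.send_lessor_application_received.assert_called_once_with(application)
    mock_events.logger.application_added.assert_called_once_with(application)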
Transfer some parts of your app to different microservices. This will let each part of your app focus on doing one or two things well (e.g. event logging, emails). Code coupling is also reduced, and different parts of the site can be tested in isolation.
The microservice architecture style involves developing a single application as a collection of smaller services that communicate, usually via an API.
You might need to use a smaller framework like Flask.
Resources:
For more information on microservices:
http://martinfowler.com/articles/microservices.html
http://aurelavramescu.blogspot.com/2014/06/user-microservice-python-way.html
First, try to break your big task down into smaller classes. Connect them with plain method calls or Django signals.
If you feel that the sub-tasks are independent enough, you can implement them as several Django applications in the same project. See the Django tutorial, which describes the relation between applications and projects.
I am dealing with a python application that consists of multiple distributed lightweight components that communicate using RabbitMQ & Kombu.
A component listens on two queues and can receive multiple message types on each queue. Subclasses can override how each message type is processed by registering custom handlers.
All this works fine.
I now have the added requirement that each component must have a basic REST/HTML interface. The idea being that you point your browser at the running component and get real-time information on what it is currently doing (what messages it is processing, CPU usage, state info, logs, etc.).
It needs to be lightweight, so after some research I have settled on Flask (but am open to suggestions). In pseudocode this means taking:
class Component:
    Queue A
    Queue B
    ...
    def setup(..):
        # connect to the broker & other initialization
    def start(..):
        # start the event loop and wait for work
    def handle_msg_on_A(self, msg):
        # dispatch a msg to a handler depending on the msg type
    def handle_msg_on_B(self, msg):
        ...
    ...
and adding a number of view methods:
@app.route('/')
def web_ui(self):
    # render to a template

@app.route('/state')
def get_state(self):
    # REST method to return some internal state info as JSON

...
However, bolting a web UI onto a class like this breaks SOLID principles and brings problems with inheritance (a subclass may want to display more or less information). Decorators are not inherited, so every view method would need to be explicitly overridden and re-decorated. Maybe using a mixin + reflection could work somehow, but it feels hackish.
Instead, using composition could work: put the web stuff in a separate class that delegates the URL routes to a fixed, predefined set of polymorphic methods on the nested component.
This way components remain unaware of Flask, at the cost of some loss in flexibility (the set of available methods is fixed).
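Roughly like this (all names are made up):

from flask import Flask, jsonify

class ComponentWebUI:
    """Owns the routes; delegates to a fixed component interface."""

    def __init__(self, component):
        self.component = component
        self.app = Flask(__name__)
        # Fixed, predefined set of routes; the component stays Flask-unaware.
        self.app.add_url_rule("/", "index", self.index)
        self.app.add_url_rule("/state", "state", self.state)

    def index(self):
        return "<h1>%s</h1>" % self.component.name

    def state(self):
        # Delegates to a polymorphic method every component must provide.
        return jsonify(self.component.get_state())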
I have now discovered Flask blueprints and Application Dispatching and it looks like they could bring a better, more extensible solution. However, I have yet to wrap my head around them.
I feel like I am missing a design pattern here and hopefully somebody with more flask-fu or experience with this type of problem can comment.
Something else was quietly introduced in Flask 0.7 that might be of interest to you - Pluggable Views. These are class-based rather than function-based endpoints, so you can use the dispatch_request method to manage your state transitions (only overriding it when needed).
The benefit of doing it this way, as opposed to using Application Dispatching, is that you get url_for support all across your application (as opposed to having to hard-code URLs that cross application boundaries). You'll have to decide whether this is likely to be an issue for your application.
In pseudo-code:
# File: components.py
from flask.views import View

class Component(View):
    # Define your Component-specific application logic here

    def dispatch_request(self, *url_args, **url_kwargs):
        # Define route-specific logic that all Components should have here.
        # Call Component-specific methods as necessary.
        ...

class Tool_1(Component):
    pass

class Tool_2(Component):
    # Override methods here
    ...

# File: app.py
from flask import Flask
from components import Tool_1, Tool_2

app = Flask(__name__)

# Assuming you want to pass all additional parameters as one argument
app.add_url_rule("/tool_1/<path:options>", view_func=Tool_1.as_view("tool1"))

# Assuming you want to pass additional parameters separately
tool_2_view = Tool_2.as_view("tool2")
app.add_url_rule("/tool_2/", view_func=tool_2_view)
app.add_url_rule("/tool_2/<option>", view_func=tool_2_view)
app.add_url_rule("/tool_2/<option>/<filter>", view_func=tool_2_view)
You can add blueprints to the mix if you have a series of components that are all logically connected together and you don't want to have to remember to put /prefix in front of each one's add_url_rule call. But if you just have a series of components that are mostly independent of each other, this is the pattern I'd use*.
*. On the other hand, if they need to be isolated from each other I'd use the Application Dispatch pattern recommended in the docs.
I'm wondering how to go about implementing a macro recorder for a Python GUI (probably PyQt, but ideally toolkit-agnostic) - something much like in Excel, but instead of producing VB macros it would create Python code. Previously I made something for Tkinter where all callbacks pass through a single class that logged actions. Unfortunately, my logging class was a bit ugly and I'm looking for a nicer one. While this did make for a nice separation of the GUI from the rest of the code, it seems unusual in terms of the usual signals/slots wiring. Is there a better way?
The intention is that a user can work their way through a data-analysis procedure in a graphical interface, seeing the effect of their decisions. Later, the recorded procedure could be applied to other data with minor modification and without needing to start up the GUI.
You could apply the command design pattern: when your user executes an action, generate a command that represents the required changes. You then implement some sort of command pipeline that executes the commands themselves, most likely just calling the methods you already have. Once the commands are executed, you can serialize them or take note of them however you want, and load the series of commands back when you need to re-execute the procedure.
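A minimal sketch of the idea (all names hypothetical):

import json

class Command:
    """One recorded user action; params must be JSON-serializable."""
    def __init__(self, name, **params):
        self.name, self.params = name, params
    def execute(self, target):
        # Dispatch to an existing method on the analysis object.
        return getattr(target, self.name)(**self.params)
    def to_dict(self):
        return {"name": self.name, "params": self.params}

class Recorder:
    def __init__(self):
        self.history = []
    def run(self, target, command):
        self.history.append(command)
        return command.execute(target)
    def save(self, path):
        with open(path, "w") as f:
            json.dump([c.to_dict() for c in self.history], f)

# Replaying later, without the GUI:
# for d in json.load(open("macro.json")):
#     Command(d["name"], **d["params"]).execute(analysis)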
Thinking at a high level, this is what I'd do:
Develop a decorator function, with which I'd decorate every event-handling function.
This decorator function would take note of the function called, and of its parameters (and possibly return values), in a unified data structure - taking care, in this data structure, to mark Widget and Control instances as a special type of object. That is because in other runs these widgets won't be the same instances - indeed, you can't even serialize a toolkit widget instance, be it Qt or otherwise.
When the time comes to play back a macro, you fill in the gaps, replacing the widget-representing objects with the instances of the actually running widgets, and simply call the original functions with the remaining parameters.
In toolkits that pass a specialized "event" parameter down to event-handling functions, you will have to take care of serializing and de-serializing this event as well.
I hope this can help. I could come up with some proof-of-concept code for that (although I am in the mood to use Tkinter today - I would have to read a lot to come up with a Qt4 example).
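As a starting point, a rough toolkit-agnostic sketch of the decorator idea (all names hypothetical):

import functools

macro_log = []          # unified data structure holding recorded calls
widget_registry = {}    # name -> live widget instance, rebuilt on each run

def record(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Mark widget instances by their registered name instead of the
        # instance itself, since live widgets cannot be serialized.
        def encode(a):
            for name, w in widget_registry.items():
                if a is w:
                    return {"__widget__": name}
            return a
        macro_log.append({
            "func": func.__name__,
            "args": [encode(a) for a in args],
            "kwargs": {k: encode(v) for k, v in kwargs.items()},
        })
        return func(*args, **kwargs)
    return wrapper

def replay(entry, handlers):
    # Fill in the gaps: swap widget markers for the currently running instances.
    def decode(a):
        if isinstance(a, dict) and "__widget__" in a:
            return widget_registry[a["__widget__"]]
        return a
    args = [decode(a) for a in entry["args"]]
    kwargs = {k: decode(v) for k, v in entry["kwargs"].items()}
    return handlers[entry["func"]](*args, **kwargs)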
An example of what you're looking for is in mayavi2. For your purposes, mayavi2's "script record" functionality will generate a Python script that can then be trivially modified for other cases. I hear that it works pretty well.
I'm working on optimizing my design in terms of MVC, intent on simplifying the API of the view, which is quite nested, even though I've built composite widgets (with their own events and/or pubsub messages) in an attempt to simplify things.
For example, I have a main top-level GUI class, a wxFrame, which has a number of widgets including a notebook; the notebook contains a number of tabs, some of which are notebooks themselves that contain composite widgets. So to call the methods of one of these composite widgets from the controller I would have:
self.gui.nb.sub_nb.composite_widget.method()
To create a suitable abstraction for the view, I have created references to these widgets (whose methods need to be called in the controller) in the view, like so:
self.composite_widget = self.nb.sub_nb.composite_widget
so that in the controller the call is now simplified to
self.gui.composite_widget.method()
Is this an acceptable way to create an abstraction layer for the gui?
Well, that's definitely one way to handle the issue. I tend to use pubsub to call methods the old-fashioned way, though. Some people like pyDispatcher better than pubsub. The main problem with multi-dot method calls is that they are hard to debug if you have to change a method name.
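For example, a minimal pubsub sketch (the topic and names are made up): the widget subscribes itself, so the controller never needs a multi-dot path to reach it.

import wx
from pubsub import pub

class CompositeWidget(wx.Panel):
    def __init__(self, parent):
        super().__init__(parent)
        # The widget subscribes itself; nobody needs a path to reach it.
        pub.subscribe(self.update_data, "widget.update")

    def update_data(self, data):
        pass  # refresh the display with the new data

# In the controller, instead of self.gui.nb.sub_nb.composite_widget.method():
# pub.sendMessage("widget.update", data=new_data)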