[background below]
I've got my data modelled out in SQLObject in Python on the back-end. Right now I'm converting an SQLObject to a dict, grabbing all the keys from that dict, and exporting them as a JSON document (so essentially just a JavaScript array of field names). I was planning on doing something like:
Spine.Model.extend({
  fromList: function(name, list){
    var model = Spine.Model.setup(name, list);
    return model;
  }
});
Is this a good idea? Does Spine already provide this functionality? Is this the best way to extend the Spine.Model class?
BACKGROUND:
So. I've got a Python application that I've been porting from a GUI app to a web app using Flask.
I'm to the point where I'm doing the view part and realized that it would make a lot of sense to use a JavaScript framework for manipulation of the data/controlling the app/etc.
After a bunch of research I've settled on Spine (the API made the most sense to me on the first read, plus the author wrote the O'Reilly book JavaScript Web Applications so there's a decent reference).
Since I've already got my data modelled out on the back-end, I'd like to export that configuration and automate the creation of the Spine models from it, so that the data the front-end records is always in sync with the back-end (that way, if I change my back-end model, the front-end picks up the change automatically at the next page load).
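To make that concrete, here's a minimal sketch of the export endpoint I have in mind, assuming a Flask app and a hypothetical SQLObject class called Contact (the route and module names are just placeholders):

from flask import Flask, jsonify
from myapp.models import Contact  # hypothetical SQLObject class

app = Flask(__name__)

@app.route("/models/contact.json")
def contact_schema():
    # sqlmeta.columns maps column names to their SQLObject column definitions
    return jsonify(name="Contact", fields=list(Contact.sqlmeta.columns.keys()))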
It looks like you are thinking about dynamically creating models with client-side JavaScript based on a model in your database, with a Python dictionary -> JSON as the linking representation between the two.
This sounds complicated and I really don't have an answer. It may even be unnecessarily complicated :), but that's for you to decide. I do, however, have an alternative solution.
Why not generate the Spine models from Python dynamically and just serve the static files? Then all you have to do is write a Python program that outputs valid code for a Spine model in JavaScript or CoffeeScript (perhaps as part of your build process if models change often, or simply as needed during development).
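For example, a rough sketch of such a generator, assuming you can already get each model's name and list of field names out of SQLObject (the output path and example fields are only illustrative):

def spine_model_js(name, fields):
    # emit a Spine.Model.setup(...) call for one model
    quoted = ", ".join('"%s"' % f for f in fields)
    return 'var %s = Spine.Model.setup("%s", [%s]);\n' % (name, name, quoted)

with open("static/js/models.js", "w") as out:
    out.write(spine_model_js("Contact", ["first_name", "last_name"]))

That would write out var Contact = Spine.Model.setup("Contact", ["first_name", "last_name"]); which you can then serve as a static file.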
Again, this may be unnecessarily complicated unless you have a LARGE number of models that change regularly. Even then, perhaps all you need is a verification tool that checks that your back-end data is modelled correctly in Spine, and just hand-code everything. It's fairly easy to hand-code the models to contain the data they need: http://spinejs.com/docs/models
Really, setting up the actual "data" in the Spine model is as simple as
@configure "Contact", "first_name", "last_name"
Now the Spine model has a first_name and a last_name...
Make sure you didn't put on your Complicator Gloves before you came up with this idea :)
Related
I have some text fields in my Django model that are filled by a script, with values in English (the list of values is known).
But the app is actually made for Russian clients only. I'd like to translate those fields into Russian, and here comes a little question. These values are taken from an API response, which means I should check each value in order to translate it. What's faster: to check and translate the fields in the template, or to add extra fields and translate the strings in the Python script?
The problem is the overhead of compiling templates when rendering. The more complicated the template gets (method calls etc.), the slower it tends to be (templates are compiled much like .py files are converted to .pyc). Django has template caching, but that is also limited (I don't know by how much). I have faced performance issues because of having a lot of logic in templates. Plus it's always good to have a dumb client (template). I would prefer the Python approach because of the idea of keeping the client thin, not because of the performance gap. Also, if tomorrow you need to add one more language, changing templates is always going to be more difficult than changing the server side.
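As a sketch of the script-side approach, assuming the set of English values coming back from the API is known in advance (the mapping, field, and variable names below are made up):

# hypothetical mapping from known English API values to Russian
TRANSLATIONS = {
    "pending": "в ожидании",
    "completed": "завершено",
}

def translate_value(value):
    # fall back to the original English value if it is not in the known list
    return TRANSLATIONS.get(value, value)

obj.status_ru = translate_value(api_response["status"])  # extra field filled by the script
obj.save()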
I've recently started working with Django. I'm working on an existing Django/Python based site. In particular I'm implementing some functionality to create and display a PDF document when a particular URL is hit. I have an entry in the app's urls file that routes to a function in the views file and the PDF generation is working fine.
However, the view function is pretty big and I want to extract the code out somewhere to keep my view as thin as possible, but I'm not sure of the best/correct approach. I'll probably need to generate other PDFs in due course so would it make sense to create a 'pdfs' app and put code in there? If so, should it go in a model or view?
In a PHP/CodeIgniter environment for example I would put the code into a model, but models seem to be closely linked to database tables in Django and I don't need any db functionality for this.
Any pointers/advice from more experienced Django users would be appreciated.
Thanks
If you plan to scale your project, I would suggest moving it to a separate app. Generally speaking, generating PDFs directly on a URL hit is not the best thing to do performance-wise. Generating a PDF file is pretty heavy on your server, so if multiple people do it at the same time, the performance of your system will suffer.
As a first step, just put it in a separate class, and execute that code from the view. At some point you will probably want to do some permission checks etc - that stays in the view, while generation of the PDF itself will be cleanly separated.
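A rough sketch of that first step might look like this (the module, function, and model names are just placeholders, and the actual PDF generation is elided):

# pdfs/generators.py (hypothetical module holding the extracted code)
def build_report_pdf(report):
    """Return the finished PDF as bytes; the existing generation code moves here."""
    ...

# views.py
from django.http import HttpResponse
from django.shortcuts import get_object_or_404
from pdfs.generators import build_report_pdf

def report_pdf(request, report_id):
    report = get_object_or_404(Report, pk=report_id)  # permission checks stay in the view
    return HttpResponse(build_report_pdf(report), content_type="application/pdf")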
Once you've tested your code, need to scale, etc., you can replace that one-line call in the view with code that puts the PDF generation on a queue and only fetches the result once it's done - that will allow you to manage your computing power better.
Yes, you can in principle do it in an app (the concept of reusable apps is the basis for their existence).
However, not many people do it and not many applications require it. It depends on how/if the functionality will be shared. In other words, there must be a real benefit.
The code normally goes in both the views and the models (to isolate code and for the model managers).
I'm working on a project (written in Django) which has only a few entities, but many rows for each entity.
In my application I have several static "reports", written directly in plain SQL. The users can also search the database via a generic filter form. Since the target audience is really tech-savvy and at some point the filter doesn't fit their needs, I'm thinking about creating a query language for my database, like YQL or Jira's advanced search.
I found http://sourceforge.net/projects/littletable/ and http://www.quicksort.co.uk/DeeDoc.html, but it seems that they only operate on in-memory objects. Since the database can be too large to hold in memory, I would prefer that the query is translated into SQL (or better, a Django query) before doing the actual work.
Are there any libraries or best practices on how to do this?
Writing such a DSL is actually surprisingly easy with PLY, and what ho, there's already an example available for doing just what you want, in Django. You see, Django has this fancy thing called a Q object which makes the Django querying side of things fairly easy.
At DjangoCon EU 2012, Matthieu Amiguet gave a session entitled Implementing Domain-specific Languages in Django Applications in which he went through the process, right down to implementing such a DSL as you desire. His slides, which include all you need, are available on his website. The final code (linked to from the last slide, anyway) is available at http://www.matthieuamiguet.ch/media/misc/djangocon2012/resources/compiler.html.
Reinout van Rees also produced some good comments on that session. (He normally does!) These cover a little of the missing context.
You see in there something very similar to YQL and JQL in the examples given:
groups__name="XXX" AND NOT groups__name="YYY"
(modified > 1/4/2011 OR NOT state__name="OK") AND groups__name="XXX"
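For reference, the first of those expressions boils down to combining Q objects roughly like this (Entry is just a stand-in for whatever model you are filtering):

from django.db.models import Q

# groups__name="XXX" AND NOT groups__name="YYY"
query = Q(groups__name="XXX") & ~Q(groups__name="YYY")
results = Entry.objects.filter(query)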
It can also be tweaked very easily; for example, you might want to use groups.name rather than groups__name (I would). This modification could be made fairly trivially (allow . in the FIELD token, by modifying t_FIELD, and then replacing . with __ before constructing the Q object in p_expression_ID).
So, that satisfies simple querying; it also gives you a good starting point should you wish to make a more complex DSL.
I've faced exactly this problem - a large database which needs searching. I made some static reports and several fancy filters using django (very easy with django) just like you have.
However the power users were clamouring for more. I decided that there already was a DSL that they all knew - SQL. The question was how to make it secure enough.
So I used Django permissions to give the power users permission to create SQL queries, stored in a new table. I then made a view for the not-quite-so-power users to run these queries. I made the queries take optional parameters. The queries were run using Python's lower-level DB-API, which Django is using under the hood for its ORM anyway.
The real trick was opening a read-only database connection to run these queries, just to make sure that no updates were ever run. I made a read-only connection by creating a different user in the database with lower permissions and opening a specific connection for that user in the view.
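One way to wire that up in Django (the alias name and credentials are illustrative) is to declare a second connection backed by a restricted database user and run the saved SQL through it:

# settings.py: a second alias that connects as a SELECT-only database user
DATABASES["readonly"] = dict(DATABASES["default"], USER="report_reader", PASSWORD="...")

# in the view
from django.db import connections

def run_saved_query(sql, params=None):
    with connections["readonly"].cursor() as cursor:
        cursor.execute(sql, params or [])
        return cursor.fetchall()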
TL;DR - SQL is the way to go!
Depending on the form of your data, the types of queries your users need to use, and the frequency that your data is updated, an alternative to the pure SQL solution suggested by Nick Craig-Wood is to index your data in Solr and then run queries against it.
Solr is an added layer of complexity (configuration, data synchronization) but it is super-fast, can handle large datasets, and provides a (relatively) intuitive query language.
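If you go that route, a Python client such as pysolr keeps the querying side small (the URL, core name, and field names below are made up):

import pysolr

solr = pysolr.Solr("http://localhost:8983/solr/mycore", timeout=10)
results = solr.search('groups_name:"XXX" AND NOT groups_name:"YYY"', rows=50)
for doc in results:
    print(doc)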
You could write your own SQL-ish language using pyparsing, actually. There is even a pretty verbose example you could extend.
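A bare-bones pyparsing grammar for expressions like the ones shown earlier might start out like this (just a sketch, not the verbose example referred to above):

from pyparsing import Word, alphas, alphanums, oneOf, quotedString, infixNotation, opAssoc

field = Word(alphas + "_", alphanums + "_")
comparison = field + oneOf("= != < >") + quotedString
expr = infixNotation(comparison, [
    ("NOT", 1, opAssoc.RIGHT),
    ("AND", 2, opAssoc.LEFT),
    ("OR", 2, opAssoc.LEFT),
])

print(expr.parseString('groups__name="XXX" AND NOT groups__name="YYY"'))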
I'm writing a psychology experiment in Python, and I need to make it available as a web app. I've already got the Python basically working as a command-line program. On the recommendation of a CS buddy I'm using Django with a sqlite db. This is also working, my development server is up and the database tables are ready and waiting.
What I don't understand is how to glue these two pieces together. The Django tutorials I've found are all about building things like blogs, messaging systems or polls; systems based on sending form data. I can't do that, because I'm timing responses to presented stimuli in milliseconds - I need to build an interactive app that doesn't rely (during the exercise) on form POST data or URL changes.
In short: I have no idea how to go from my simple command line program to a "real time" interactive web application.
Maximum kudos for links to relevant tutorials! I will also really appreciate a high-level explanation of the concept I'm missing here.
(FYI, I asked a previous question (choice of database) about this project here)
You are going to need to use HTML/JavaScript, and then you can collect and send the results to the server. The results can be gamed though, as the code for the exercise is going to be client-side.
Edit: I recommend a JavaScript library, jQuery: http://docs.jquery.com/Tutorials
Edit 2:
I'll be a bit more specific: you need at least two models in Django, Exercise and ExecutedExercise. Exercise will have fields for its name, number, etc. - generic data for each exercise. ExecutedExercise will have two fields: a foreign key to Exercise, and a field to store how long it took to finish.
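A minimal sketch of those two models (field names and sizes are just guesses):

from django.db import models

class Exercise(models.Model):
    name = models.CharField(max_length=100)
    number = models.IntegerField()

class ExecutedExercise(models.Model):
    exercise = models.ForeignKey(Exercise, on_delete=models.CASCADE)
    duration_ms = models.IntegerField()  # time taken, in milliseconds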
Now in JavaScript, you're going to time the exercises and then post the timings to a Django view that will handle the data storage. How to post them? You could use http://api.jquery.com/jQuery.post/ Create the data object, data = { e1: timingE1, e2: timingE2 }, and post it to the view. You can handle the POST parameters in that view, create an ExecutedExercise object (you'll have the time it took for each exercise) and save them.
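On the Django side, the receiving view could look roughly like this (the e1, e2, ... key format and the view name are assumptions carried over from the example above):

from django.http import JsonResponse

def record_timings(request):
    saved = 0
    for key, value in request.POST.items():
        number = int(key.lstrip("e"))  # "e1" -> 1
        exercise = Exercise.objects.get(number=number)
        ExecutedExercise.objects.create(exercise=exercise, duration_ms=int(value))
        saved += 1
    return JsonResponse({"saved": saved})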
I'm going back over an old project where I added preprocessor functionality to Essence', and I realised that my previous solution of writing a domain-specific language and associated lexer/parser was overkill.
Instead I just need to be able to embed dynamic language code into the file, isolate it at runtime, eval it, and insert the results. In other words, very similar to the PHP model of inserting dynamic code into HTML. I'd rather not use PHP, as Python is much easier to distribute as part of a larger project (IronPython or Jython).
So the question goes, how best to implement something like the following:
<code>Python goes here</code>
Lots of Essence' <code>Python</code> code goes here
I don't want to have to alter the structure of the Essence' file (if I remove all the code blocks, everything left should still be syntactically correct). It needs to be able to insert text in place of a code block, like PHP does.
Finally, security-wise I'm not bothered about code injection, as it would be the user themselves choosing the file to execute, although if there were security benefits to one model over another with no extra cost that would obviously be good.
Cheers in advance
Your best bet is to use one of the already-made (and battle-tested) templating engines. The two big ones that I've used are Mako and Cheetah. They allow you to embed code right in the page, and are mostly used as the View in an MVC architecture.
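For instance, with Mako the embedding looks roughly like this (the template text is just a toy example):

from mako.template import Template

template = Template("""Lots of Essence' code goes here
<% result = 2 + 2 %>
The embedded Python produced ${result}.""")

print(template.render())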
If you feel that using one of those engines is overkill for your project, here is a small tutorial on how to implement basic templates yourself. Keep in mind that the example will need to be modified to suit your particular project/needs.