Setting up Pyramid to use raw MySQL instead of SQLAlchemy - python

We're trying to set up a Pyramid project that will use raw MySQL queries instead of SQLAlchemy.
My experience with Pyramid/Python is limited, so I was hoping to find a guide online. Unfortunately, I haven't been able to find anything to push us in the right direction. Most search results were for people trying to use raw SQL/MySQL commands with SQLAlchemy (many were re-posted links).
Anyone have a useful tutorial on this?

Pyramid at its base does not assume that you will use any one specific library to help you with your persistence. To make things easier for people who DO wish to use libraries such as SQLAlchemy, Pyramid provides scaffolding, which is essentially some auto-generated code for a basic site, with additions that set up items like SQLAlchemy or a specific routing strategy. The Pyramid documentation should be able to lead you through creating a new project using the "pyramid_starter" scaffold, which sets up a basic site without SQLAlchemy.
This will give you the basics you need to set up your views, but next you will need to add code that connects to a database. Luckily, since your site is just Python code, learning how to use MySQL in Pyramid is simply learning how to use MySQL in Python, and then doing the exact same steps within your Pyramid project.
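For example, a view that talks to MySQL directly might look something like this minimal sketch, which assumes the PyMySQL driver and made-up connection settings (any DB-API driver such as MySQLdb or mysql-connector-python follows the same pattern):

import pymysql

def my_view(request):
    # hypothetical connection settings; in a real app you would read
    # these from your Pyramid .ini settings instead of hard-coding them
    conn = pymysql.connect(host="localhost", user="appuser",
                           password="secret", database="mydb")
    try:
        with conn.cursor() as cursor:
            cursor.execute("SELECT * FROM foo")
            rows = cursor.fetchall()
    finally:
        conn.close()
    return {"rows": rows}  # rendered by whatever renderer the view uses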
Keep in mind that even if you'd rather use raw SQL queries, you might still find SQLAlchemy useful. At its base level, SQLAlchemy simply wraps the DBAPI calls and adds useful features like connection pooling. The ORM functionality is actually a large addition on top of that tight, lower-level SQLAlchemy toolset.

SQLAlchemy does not make any assumption that you will be using its ORM. If you wish to use plain SQL, you can do so with nothing more than what SQLAlchemy already provides. For instance, if you followed the recipe in the cookbook, you would have access to the SQLAlchemy session object as request.db, and your handler would look something like this:
def someHandler(request):
    # request.db is the SQLAlchemy session the cookbook recipe sets up
    rows = request.db.execute("SELECT * FROM foo").fetchall()
    return {"rows": rows}

The Quick Tutorial shows a Pyramid application that uses SQL but not SQLAlchemy. It uses SQLite, but should be reasonably easy to adapt for MySQL.

Related

Cleanest way to make an ORM for neo4j + sql in python flask? One model over 2 databases

How can I create one model that talks to two databases in Flask, where one is, say, sqlite, and the other is specifically neo4j?
I'd like to have login and password stuff in a traditional db, and keep other graphy information in neo4j. I'm told neo4j is bad for things that need large graph traversals. Perhaps I'm wrong in needing this, but I have an instance where I'd like to say something like...
"return a dict(person.x,person.y,person.z) from all nodes where type==person", and then feed that into the view of my index page.
I've seen related questions about ORMs with neo4j:
ORM with Graph-Databases like Neo4j in Python
...and this about multiple DBs in Flask:
http://packages.python.org/Flask-SQLAlchemy/binds.html
Specifically, I see this taking the form of my create statement writing to the sqlite db connection and then writing a key from there to the additional relational information in neo4j.
I've recently released an OGM (Object-Graph Mapping) module for py2neo (http://book.py2neo.org/en/latest/ogm.html). This might help with what you're trying to do.
Otherwise, you could also look at neomodel (https://github.com/robinedwards/neomodel). It's written for Django but should be usable in Flask too.
I don't know about mixed backend models, but I think depending on your user count, you can use neo4j for your users, too. If you put the user nodes into an index, you can get all users without searching the graph.
If you find that this actually is a bottleneck, migrating it to a split storage should not be too difficult.
It is not that hard to adapt neo4j-driver and py2neo to work with e.g. Flask-Login.
I have used py2neo for that and it worked well, but I have now migrated to neo4j-driver.
The downside is that I did not manage to get it working alongside e.g. SQLAlchemy.
Using a double-backend solution is not a problem in itself: in an earlier project I used SQLAlchemy (with SQLite3 and PostgreSQL), Neo4j, and Redis together.
With that setup I ran into nothing worse than some design issues.
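To give a rough idea, the neo4j-driver side of a Flask-Login integration can be as small as the following sketch; the bolt URL, credentials, and the User label with its username property are all hypothetical:

from flask_login import UserMixin
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))

class User(UserMixin):
    def __init__(self, username):
        self.id = username  # Flask-Login only needs an id it can round-trip

def load_user(user_id):
    # register this with login_manager.user_loader in your app factory;
    # the username property is assumed to be uniquely indexed
    with driver.session() as session:
        record = session.run(
            "MATCH (u:User {username: $name}) RETURN u", name=user_id
        ).single()
    return User(record["u"]["username"]) if record else None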

Building a DSL query language

I'm working on a project (written in Django) which has only a few entities, but many rows for each entity.
In my application I have several static "reports", written directly in plain SQL. The users can also search the database via a generic filter form. Since the target audience is really tech-savvy, and at some point the filter no longer fits their needs, I'm thinking about creating a query language for my database, like YQL or Jira's advanced search.
I found http://sourceforge.net/projects/littletable/ and http://www.quicksort.co.uk/DeeDoc.html, but it seems that they only operate on in-memory objects. Since the database can be too large to hold in memory, I would prefer that the query be translated into SQL (or, better, a Django query) before doing the actual work.
Are there any library or best practices on how to do this?
Writing such a DSL is actually surprisingly easy with PLY, and what ho, there's already an example available for doing just what you want, in Django. You see, Django has this fancy thing called a Q object which makes the Django querying side of things fairly easy.
At DjangoCon EU 2012, Matthieu Amiguet gave a session entitled Implementing Domain-specific Languages in Django Applications in which he went through the process, right down to implementing such a DSL as you desire. His slides, which include all you need, are available on his website. The final code (linked to from the last slide, anyway) is available at http://www.matthieuamiguet.ch/media/misc/djangocon2012/resources/compiler.html.
Reinout van Rees also produced some good comments on that session. (He normally does!) These cover a little of the missing context.
You see in there something very similar to YQL and JQL in the examples given:
groups__name="XXX" AND NOT groups__name="YYY"
(modified > 1/4/2011 OR NOT state__name="OK") AND groups__name="XXX"
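For reference, those expressions compile down to ordinary Django Q objects; the first one corresponds roughly to the following (Person is a hypothetical model standing in for whatever you are querying):

from django.db.models import Q

# groups__name="XXX" AND NOT groups__name="YYY"
results = Person.objects.filter(Q(groups__name="XXX") & ~Q(groups__name="YYY"))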
It can also be tweaked very easily; for example, you might want to use groups.name rather than groups__name (I would). This modification could be made fairly trivially (allow . in the FIELD token, by modifying t_FIELD, and then replacing . with __ before constructing the Q object in p_expression_ID).
So, that satisfies simple querying; it also gives you a good starting point should you wish to make a more complex DSL.
I've faced exactly this problem: a large database which needs searching. I made some static reports and several fancy filters using Django (very easy with Django), just like you have.
However, the power users were clamouring for more. I decided that there already was a DSL that they all knew: SQL. The question was how to make it secure enough.
So I used Django permissions to give the power users permission to create SQL queries in a new table. I then made a view for the not-quite-so-power users to run these queries, and made the queries take optional parameters. The queries were run using Python's lower-level DB-API, which Django uses under the hood for its ORM anyway.
The real trick was opening a read-only database connection to run these queries, just to make sure that no updates could ever be run. I made the connection read-only by creating a different database user with lower permissions and opening a dedicated connection as that user in the view.
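In Django terms that can look something like the sketch below, assuming a hypothetical second "readonly" entry in settings.DATABASES that authenticates as a SELECT-only database user:

from django.db import connections

def run_report(sql, params=None):
    # writes fail at the database level, because the "readonly" alias
    # connects as a user without INSERT/UPDATE/DELETE privileges
    with connections["readonly"].cursor() as cursor:
        cursor.execute(sql, params)
        return cursor.fetchall()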
TL;DR - SQL is the way to go!
Depending on the form of your data, the types of queries your users need, and how frequently your data is updated, an alternative to the pure SQL solution suggested by Nick Craig-Wood is to index your data in Solr and then run queries against it.
Solr is an added layer of complexity (configuration, data synchronization) but it is super-fast, can handle large datasets, and provides a (relatively) intuitive query language.
You could also write your own SQL-ish language using pyparsing; there is even a pretty verbose example you could extend.
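To give a taste, here is a minimal pyparsing sketch for the kind of filter expressions shown earlier. It only parses field="value" conditions combined with NOT/AND/OR; dates, comparison operators, and compiling the parse tree into queries are left out:

from pyparsing import (Word, alphas, alphanums, QuotedString, Suppress,
                       Group, infixNotation, opAssoc, CaselessKeyword)

field = Word(alphas + "_", alphanums + "_")
value = QuotedString('"')
condition = Group(field + Suppress("=") + value)

expr = infixNotation(condition, [
    (CaselessKeyword("NOT"), 1, opAssoc.RIGHT),
    (CaselessKeyword("AND"), 2, opAssoc.LEFT),
    (CaselessKeyword("OR"), 2, opAssoc.LEFT),
])

print(expr.parseString('groups__name="XXX" AND NOT groups__name="YYY"'))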

ORM with Graph-Databases like Neo4j in Python

I wonder whether there is a solution (or a need) for an ORM with a graph database (e.g. Neo4j). I'm tracking relationships (A is related to B, which is related to A via C, etc., thus constructing a large graph) of entities (including additional attributes for those entities) and need to store them in a DB, and I think a graph database would fit this task perfectly.
Now, with SQL-like DBs, I use SQLAlchemy's ORM to store my objects, especially because I can retrieve objects from the DB and work with them in a Pythonic style (use their methods, etc.).
Is there any object-mapping solution for Neo4j or another graph DB, so that I can store and retrieve Python objects into and from the graph DB and work with them easily?
Or would you write functions or adapters, as in the Python sqlite documentation (http://docs.python.org/library/sqlite3.html#letting-your-object-adapt-itself), to retrieve and store objects?
Shameless plug... there is also my own ORM which you may want to check out: https://github.com/robinedwards/neomodel
It's built on top of py2neo, using Cypher and REST API calls under the hood, i.e. no dependency on Gremlin.
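Model definitions in neomodel look roughly like this minimal sketch (the class and property names are made up for illustration):

from neomodel import StructuredNode, StringProperty, IntegerProperty, RelationshipTo

class Person(StructuredNode):
    name = StringProperty(unique_index=True)  # indexed, much like a Django field
    age = IntegerProperty()
    # a FRIEND relationship to other Person nodes
    friends = RelationshipTo('Person', 'FRIEND')

# usage: Person(name="Alice", age=30).save(), then alice.friends.connect(bob)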
There are a couple choices in Python out there right now, based on databases' REST interfaces.
As I mentioned in the link @Peter provided, we're working on neo4django, which updates the old Neo4j/Django integration. It's a good choice if you need complex queries and want an ORM that will manage node indexing as well, or if you're already using Django. It works very similarly to the native Django ORM. Find it on PyPI or GitHub.
There's also a more general solution called Bulbflow that is supposed to work with any graph database supported by Blueprints. I haven't used it, but from what I've seen it focuses on domain modeling. Bulbflow already has working relationship models, for example, which we're still working on, but it doesn't have much support for complex querying (as we do with Django querysets + index use). It also lets you work a bit closer to the graph.
Maybe you could take a look at Bulbflow, which allows you to create models in Django, Flask, or Pyramid. However, it works over a REST client instead of the Python bindings provided by Neo4j, so it's perhaps not as fast as the native bindings.

Proper way to establish database connection in python

I have a script with several functions that all need to make database calls. I'm trying to get better at writing clean code rather than just throwing together scripts with horrible style. What is generally considered the best way to establish a global database connection that can be accessed anywhere in the script, but is not susceptible to errors such as accidentally redefining the variable holding the connection? I'd imagine I should be putting everything in a module? Any links to actual code would be very useful as well. Thanks.
If you are working with Python and databases, you cannot afford not to look at SQLAlchemy:
SQLAlchemy is the Python SQL toolkit and Object Relational Mapper that gives application developers the full power and flexibility of SQL. It provides a full suite of well known enterprise-level persistence patterns, designed for efficient and high-performing database access, adapted into a simple and Pythonic domain language.
I have built very complex databases with a surprisingly small amount of code (a few hundred lines). The schema definition is almost self-documenting, the objects used for the Object Relational Mapper are Plain Old Python Objects (i.e., what you already have), and the querying API is almost obvious. In addition, the documentation is excellent: many online examples, fully documented API, and an O'Reilly book which, while far from perfect, does take you from zero to dangerous in a few evenings.
If you don't want to use the Object Relational Mapper, you can always fall back to plain connections and literal SQL. Also, the code is portable and database independent (the same code will work with MySQL, Oracle, SQLite, and other database managers).
SQLAlchemy's engine will automatically take care of connection pooling (the concern you mention about managing a shared connection).
The best way to understand its power is probably to follow the tutorials on the first page of results for the Google query sqlalchemy tutorial.
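As for structuring the connection itself, a common pattern is to put the engine in its own module and import it wherever you need it. A minimal sketch, assuming SQLAlchemy 1.4+ and a made-up connection URL:

# db.py
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# module-level engine: created once on first import, shared everywhere
engine = create_engine("mysql+pymysql://appuser:secret@localhost/mydb")
Session = sessionmaker(bind=engine)

# elsewhere in the script:
#     from db import Session
#     from sqlalchemy import text
#     with Session() as session:
#         rows = session.execute(text("SELECT * FROM foo")).fetchall()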
Use a model system/ORM system.

Lightweight Object->Database in Python

I am in need of a lightweight way to store dictionaries of data into a database. What I need is something that:
Creates a database table from a simple type description (int, float, datetime etc)
Takes a dictionary object and inserts it into the database (including handling datetime objects!)
If possible: Can handle basic references, so the dictionary can reference other tables
I would prefer something that doesn't do a lot of magic. I just need an easy way to set up an SQL database and get data into it.
What would you suggest? There seems to be a lot of ORM software around, but I find it hard to evaluate.
SQLAlchemy's SQL expression layer can easily cover the first two requirements. If you also want reference handling then you'll need to use the ORM, but this might fail your lightweight requirement depending on your definition of lightweight.
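To give an idea of the expression layer, here is a minimal sketch covering the first two points: a table declared from simple types, and a dictionary inserted into it. The table and column names are made up, and SQLite is used so the example is self-contained:

from datetime import datetime
from sqlalchemy import (MetaData, Table, Column, Integer, Float,
                        String, DateTime, create_engine)

engine = create_engine("sqlite:///:memory:")
metadata = MetaData()

measurements = Table("measurements", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
    Column("value", Float),
    Column("taken_at", DateTime))  # datetime objects are handled natively
metadata.create_all(engine)

row = {"name": "temperature", "value": 21.5, "taken_at": datetime.now()}
with engine.begin() as conn:  # commits on success
    conn.execute(measurements.insert(), row)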
SQLAlchemy offers an ORM much like Django's, but does not require that you work within a web framework.
From its description, Axiom may be a Pythonic tool for this.
Seeing as you have mentioned SQL, Python, and ORM in your tags, are you looking for Django? Of all the web frameworks I've tried, I like this one the best. You'd be looking at models, specifically. This could be too fancy for your needs, perhaps, but that shouldn't stop you from looking at Django's own code and learning from it.
