Is it possible to perform insert, update, and delete queries without session.add/session.commit, using just the ORM model classes and the engine?
Something like:
us = User(name='john')
engine.execute(us)
Clearly documented in the SQLAlchemy docs:
http://docs.sqlalchemy.org/en/latest/#binding-metadata-to-an-engine-or-connection
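For example, here is a minimal sketch of executing Core statements built from a mapped class, with no Session involved (the User model and SQLite URL are assumptions for illustration):

from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

engine = create_engine('sqlite://')  # placeholder URL
Base.metadata.create_all(engine)

# Insert/update/delete via the mapped table; no session.add/commit.
with engine.begin() as conn:  # begin() commits on success
    conn.execute(User.__table__.insert().values(name='john'))
    conn.execute(User.__table__.update()
                 .where(User.__table__.c.name == 'john')
                 .values(name='johnny'))
    conn.execute(User.__table__.delete()
                 .where(User.__table__.c.name == 'johnny'))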
Another option is using contextual sessions:
http://docs.sqlalchemy.org/en/latest/#lifespan-of-a-contextual-session
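A minimal sketch of the contextual-session approach, reusing the hypothetical User model and engine above: the scoped_session registry gives each thread its own session, so you work through module-level calls instead of passing a session around.

from sqlalchemy.orm import scoped_session, sessionmaker

Session = scoped_session(sessionmaker(bind=engine))

Session.add(User(name='john'))  # proxied to the thread-local session
Session.commit()
Session.remove()  # dispose of the thread-local session when done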
I have a PostgreSQL database with existing tables. I wish to:
Create a set of Python models (plain classes, SQLAlchemy models or other) based on the existing database
Then manage changes in these models with a migrations tool.
The second part I think is easy to achieve as long as I manage to get my initial schema created. How can this be achieved?
So, for anyone willing to use SQLAlchemy, I found these two solutions:
Straight SQLAlchemy reflection and automapping (a sketch follows below)
With sqlacodegen
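A minimal automap sketch for the first option (the connection URL and the users table name are placeholders; older SQLAlchemy versions used Base.prepare(engine, reflect=True) instead of autoload_with):

from sqlalchemy import create_engine
from sqlalchemy.ext.automap import automap_base

engine = create_engine('postgresql://user:pass@localhost/mydb')  # placeholder DSN
Base = automap_base()
Base.prepare(autoload_with=engine)  # reflect tables, generate mapped classes

User = Base.classes.users  # class named after the reflected table

For the second option, sqlacodegen writes the declarative models to a file you can then put under a migrations tool:

sqlacodegen postgresql://user:pass@localhost/mydb > models.py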
In Django, you can extract a plain-text SQL query from a QuerySet object like this:
queryset = MyModel.objects.filter(**filters)
sql = str(queryset.query)
In most cases, this query is not itself valid SQL: you can't paste it into a SQL interface of your choice or pass it to MyModel.objects.raw() without exceptions, since parameter quoting (and possibly other transformations) is performed not by Django but by the database driver at execution time. So at best, this is a useful debugging tool.
Coming from a data science background, I often need to write a lot of complex SQL queries to aggregate data into a reporting format. The Django ORM can be awkward at best and impossible at worst when queries need to be very complex. However, it does offer some security and convenience with respect to limiting SQL injection attacks and providing a way to dynamically build a query - for example, generating the WHERE clause for the query using the .filter() method of a model. I want to be able to use the ORM to generate a base data set in the form of a query, then take that query and use it as a subquery/CTE in a larger query that handles more complex logic. For example:
queryset = MyModel.objects.filter(**filters)
sql = str(queryset.query)
more_complex_query = f"""
with filtered_table as ({sql})
select
*
/* add other stuff */
from
filtered_table
"""
results = MyModel.objects.raw(more_complex_query)
In this case, the ORM generates a query that filters the base table; the CTE/raw SQL can then take that result and do whatever calculations are needed with a tool that is more common among people working with data (SQL) than the Django ORM, while still getting the ORM's benefit of guarding against SQL injection.
However, this method requires a way to generate a usable SQL query from a QuerySet object. I've found a workaround for postgres databases using the psycopg2 cursor:
from django.db import connections
# Whatever the key is in your settings.DATABASES for the reporting db
WAREHOUSE_CONNECTION_NAME = 'default'
# Get the Query object and separate it into the query and params
filtered_table_query = MyModel.objects.filter(**filters).query
raw_query, params = filtered_table_query.sql_with_params()
# Create a cursor from the relevant connection
cursor = connections[WAREHOUSE_CONNECTION_NAME].cursor()
# Call .mogrify() on the query/params to get an executable query string.
# psycopg2's mogrify() returns bytes, so decode it for use in the f-string below.
usable_sql = cursor.mogrify(raw_query, params).decode()
cursor.execute(usable_sql) # This works
cursor.fetchall() # This works
# Have not tried this yet
MyModel.objects.raw(usable_sql)
# Or this
wrapper_query = f"""
with base_table as ({usable_sql})
select
*
from
base_table
"""
cursor.execute(wrapper_query)
# or
MyModel.objects.raw(wrapper_query)
This method depends on the psycopg2 cursor method .mogrify(). As far as I can tell, mogrify() is a psycopg2 extension rather than part of the DB API 2.0 spec, so I am not sure whether other backends offer an equivalent.
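A sketch of a more portable variant, assuming the parameter placeholders survive the wrapping: keep the query and parameters separate and pass both to raw(), so the database driver does the binding instead of mogrify().

filtered_table_query = MyModel.objects.filter(**filters).query
raw_query, params = filtered_table_query.sql_with_params()

wrapper_query = f"""
with base_table as ({raw_query})
select
    *
from
    base_table
"""

# raw() accepts a params argument; the driver performs the quoting,
# so nothing psycopg2-specific is needed.
results = MyModel.objects.raw(wrapper_query, params)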
Other people have suggested creating a view in the database and then putting an unmanaged Django model on top of the view. But I think this does not really work when your queries are dynamic in nature, i.e. they need to be filtered differently based on user input, since the fields a user wants to filter on are often no longer present in the result set after aggregation.
So overall, I have two questions:
Is there a reason why Django does not let you extract a usable SQL query as a standard offering?
What other methods do people use when the ORM makes your elegant SQL into an ugly mess?
The Django developers tend to frown on features that aren't compatible across all the databases they support. I can only imagine that one of the supported database engines doesn't have this capability, so they don't provide it as a standard, documented feature of the ORM.
But that's just a guess. You'd really have to ask one of the devs :)
So I have two tables in a one-to-many relationship. When I make a new row in Table1, I want to populate Table2 with the related rows. However, this population actually involves computing the Table2 rows using data in other related tables.
What's a good way to do that using the ORM layer? That is, assuming that that the Table1 mappings are created through the ORM, where/how should I call the code to populate Table2?
I thought about using the after_insert hook, but I want to have a session to pass to the population method.
Thanks.
You can use the before_flush or after_flush hook; it provides a session. You then check the session.new collection for newly created objects (tip: use isinstance(obj, ModelClass)) and do your work there.
In fact, SQLAlchemy recommends before_flush for general on flush changes.
Mapper-level flush events only allow very limited operations, on attributes local to the row being operated upon only, as well as allowing any SQL to be emitted on the given Connection. Please read fully the notes at Mapper-level Events for guidelines on using these methods; generally, the SessionEvents.before_flush() method should be preferred for general on-flush changes.
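A minimal sketch of that approach (Parent, Child, and compute_children() are hypothetical stand-ins for your actual models and computation):

from sqlalchemy import event
from sqlalchemy.orm import Session

@event.listens_for(Session, 'before_flush')
def populate_table2(session, flush_context, instances):
    for obj in session.new:  # objects pending INSERT in this flush
        if isinstance(obj, Parent):  # Parent is a hypothetical mapped class
            # The session is available here for any queries the
            # computation needs against other related tables.
            for child in compute_children(session, obj):  # hypothetical helper
                session.add(child)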
After asking around in the #sqlalchemy IRC channel, it was pointed out that this could be done using ORM-level relationships in a before_flush event listener.
It was explained that when you add a mapping through a relationship, the foreign key is automatically filled on flush, and the appropriate insert statement generated by the ORM.
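A minimal sketch of that (the children relationship and Child model are hypothetical): inside the listener, appending through the relationship is enough, and the ORM fills in the foreign key and emits the INSERT at flush time.

# Inside the before_flush listener above, instead of session.add(child):
obj.children.append(Child(value=42))  # FK populated automatically on flush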
How do I use the Werkzeug framework without any ORM like SQLAlchemy? In my case, it's a lot of effort to rewrite all the tables and columns in SQLAlchemy from existing tables & data.
How do I query the database and make an object from the database output?
In my case now, I use Oracle with cx_Oracle. If you have a solution for MySQL, too, please mention it.
Thanks.
SQLAlchemy supports reflection, so you don't have to do that. Take a look at the autoload parameter of Table; you can even make this work with the ORM.
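A minimal reflection sketch (the Oracle DSN and table name are placeholders; older SQLAlchemy versions spelled this autoload=True, newer ones use autoload_with):

from sqlalchemy import create_engine, MetaData, Table

engine = create_engine('oracle+cx_oracle://user:pass@host:1521/?service_name=orcl')  # placeholder DSN
metadata = MetaData()

mupp = Table('mupp', metadata, autoload_with=engine)  # columns reflected from the DB

with engine.connect() as conn:
    rows = conn.execute(mupp.select()).fetchall()
    print(rows[0].name)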
Is it a problem to use the plain DB API and issue regular SQL queries, etc.? cx_Oracle even has connection pooling built in to help you manage connections.
Maybe this is what I'm looking for: http://www.sqlalchemy.org/trac/wiki/SqlSoup
and http://spyced.blogspot.com/2006/04/introducing-sqlsoup.html
so I don't have to declare the table to get the object:
rp = db.bind.execute('select * from mupp')  # raw SQL against the bound engine
a = rp.fetchall()
a[0].name  # rows support attribute access by column name
That's great... thanks for all the inspiring responses.
Has anyone used SQLAlchemy in addition to Django's ORM?
I'd like to use Django's ORM for object manipulation and SQLAlchemy for complex queries (like those that require left outer joins).
Is it possible?
Note: I'm aware about django-sqlalchemy but the project doesn't seem to be production ready.
What I would do:
Define the schema in the Django ORM and let it write the DB via syncdb; you get the admin interface for free.
Then, in a view where you need a complex join:
def view1(request):
    import sqlalchemy
    # complex_join_magic is a placeholder for your actual SQLAlchemy query
    data = sqlalchemy.complex_join_magic(...)
    ...
    payload = {'data': data, ...}
    return render_to_response('template', payload, ...)
I've done it before and it's fine. Use the SQLAlchemy feature where it can read in the schema, so you don't need to declare your fields twice.
You can grab the connection settings from the Django settings; the only problem is stuff like the different flavours of the Postgres driver (e.g. with psyco and without).
It's worth it, as the SQLAlchemy stuff is just so much nicer for things like joins (see the sketch below).
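A minimal sketch of that setup (the table and column names are hypothetical, and the real DSN would be built from settings.DATABASES):

from sqlalchemy import create_engine, MetaData, Table, select

engine = create_engine('postgresql://user:pass@localhost/mydb')  # placeholder DSN

metadata = MetaData()
# Reflect the tables Django created, so the fields are declared only once.
authors = Table('app_author', metadata, autoload_with=engine)
books = Table('app_book', metadata, autoload_with=engine)

# A LEFT OUTER JOIN of the kind that is awkward in the Django ORM.
stmt = select(authors.c.name, books.c.title).select_from(
    authors.outerjoin(books, books.c.author_id == authors.c.id)
)
with engine.connect() as conn:
    data = conn.execute(stmt).fetchall()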
I don't think it's good practice to use both. You should either:
Use Django's ORM and use custom SQL where Django's built-in SQL generation doesn't meet your needs, or
Use SQLAlchemy (which gives you finer control at the price of added complexity).
Of course, if you need Django's admin, then the first of these approaches is recommended.
Jacob Kaplan-Moss admitted to typing "import sqlalchemy" from time to time. I may write a queryset adapter for sqlalchemy results in the not too distant future.
Nowadays you can use Aldjemy. Consider using this tutorial.