The official tutorial for SQLAlchemy provides examples that make use of the session system, such as the following:
>>> from sqlalchemy.orm import sessionmaker
>>> Session = sessionmaker(bind=engine)
Many unofficial tutorials also make use of sessions; however, some don't use them at all, instead opting for something like this approach:
from sqlalchemy import create_engine, text

e = create_engine('sqlite:///company.db')
conn = e.connect()
query = conn.execute(text("SELECT first_name FROM employee"))  # text() is required in SQLAlchemy 1.4+; older versions accepted a plain string
Why are sessions needed at all when this much simpler system seems to do the same thing? The official documentation doesn't make it clear why this would be necessary, as far as I've seen.
There is one particularly relevant section in the official SQLAlchemy documentation:
A web application is the easiest case because such an application is already constructed around a single, consistent scope - this is the request, which represents an incoming request from a browser, the processing of that request to formulate a response, and finally the delivery of that response back to the client. Integrating web applications with the Session is then the straightforward task of linking the scope of the Session to that of the request. The Session can be established as the request begins, or using a lazy initialization pattern which establishes one as soon as it is needed. The request then proceeds, with some system in place where application logic can access the current Session in a manner associated with how the actual request object is accessed. As the request ends, the Session is torn down as well, usually through the usage of event hooks provided by the web framework. The transaction used by the Session may also be committed at this point, or alternatively the application may opt for an explicit commit pattern, only committing for those requests where one is warranted, but still always tearing down the Session unconditionally at the end.
...and...
Some web frameworks include infrastructure to assist in the task of
aligning the lifespan of a Session with that of a web request. This
includes products such as Flask-SQLAlchemy, for usage in conjunction
with the Flask web framework, and Zope-SQLAlchemy, typically used with
the Pyramid framework. SQLAlchemy recommends that these products be
used as available.
Unfortunately I still can't tell whether I need to use sessions, or if the final paragraph is implying that certain implementations such as Flask-SQLAlchemy are already managing sessions automatically.
Do I need to use sessions? Is there a significant risk to not using sessions? Am I already using sessions because I'm using Flask-SQLAlchemy?
Like you pointed out, sessions are not strictly necessary if you only construct and execute queries using plain SQLAlchemy Core. However, they provide a higher layer of abstraction that is required to take advantage of the SQLAlchemy ORM. A session maintains a graph of modified model objects and makes sure the changes are flushed to the database efficiently and consistently when necessary.
Since you already use Flask-SQLAlchemy, I don't see a reason to avoid sessions even if you don't need the ORM features. The extension handles all the plumbing necessary for isolating requests, so that you don't have to reinvent the wheel and can focus on your application code.
You need sessions if you want to use ORM features, including:
Changing an attribute on an object returned by a query and having the change easily written back to the database
Creating an object using a normal Python constructor and having it easily sent to the database
Having relationships on objects, so that, for example, if you had a blog with a post object, you could write post.author to access the User object responsible for that post
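The three features above can be sketched in a few lines. This is a minimal, hedged example using an in-memory SQLite database; the model names (User, Post) and fields are illustrative, not from the question:

```python
# Minimal sketch of the ORM features listed above (SQLAlchemy 1.4+ API).
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
from sqlalchemy.orm import declarative_base, relationship, Session

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Post(Base):
    __tablename__ = "posts"
    id = Column(Integer, primary_key=True)
    title = Column(String)
    author_id = Column(Integer, ForeignKey("users.id"))
    author = relationship("User")  # post.author gives back the User object

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    # create objects with normal constructors and send them to the database
    alice = User(name="Alice")
    post = Post(title="Hello", author=alice)
    session.add(post)           # alice is saved too, via the relationship
    session.commit()

    post.title = "Hello, world"  # plain attribute assignment...
    session.commit()             # ...written back on commit

    fetched = session.query(Post).first()
    author_name = fetched.author.name  # relationship traversal
    final_title = fetched.title
```

None of this works with bare Core connections; the session is what tracks the objects and their pending changes.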
I'll note also that even if you use Session, you don't actually need sessionmaker. In a web application, you probably want it, but you can use sessions like this if you're looking for simplicity:
from sqlalchemy import create_engine
from sqlalchemy.orm import Session
engine = create_engine(...)
session = Session(bind=engine)
Again, I think you probably want sessionmaker in your app, but you don't need it for sessions to work.
Related
I have a pretty simple model. A user defines the URL and database name for his own Postgres server. My Django backend fetches some info from the client DB to do some calculations and analytics and draw some graphs.
How should I handle connections? Create a new one when a client opens a page, or keep connections alive all the time? (about 250-300 possible clients)
Can I use the Django ORM or something like SQLAlchemy? Or even the psycopg library?
Has anyone tackled such a problem before?
Thanks
In your case, I would rather go with Django's internal implementation and use the Django ORM, as you will not need to worry about handling connections and the various exceptions that may arise from your own implementation of a DAO layer in your code.
As per your requirement, you need to access each user's database, and there is still overhead for individual users to create the database and set up something to connect it with your codebase. So I think sticking with Django will be the sounder choice.
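If you do go that route, per-client databases map naturally onto Django's multiple-database support. A rough sketch of the settings and the query routing; the alias, credentials, and model name are placeholders, not anything from the question:

```python
# settings.py (fragment) -- aliases and credentials are placeholder values
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "app_db",
    },
    "client_acme": {  # hypothetical per-client alias
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "acme_db",
        "HOST": "client-host.example.com",
    },
}

# elsewhere, route a query to that client's database:
#   SomeModel.objects.using("client_acme").all()
```

Django then manages the connection lifecycle for each alias for you.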
I am using SQLAlchemy (with MySQL as the database) with CherryPy. I have created an engine which is application-wide. I read in the SQLAlchemy sessions docs that sessions are not thread-safe. Does that mean I should create a separate session for each REST request? If so, does the session use the default connection pool in SQLAlchemy (with pool_size=5 and max_overflow=10)? Does this mean that 15 (pool_size + max_overflow) concurrent requests can be handled without a problem? Also, does a single connection pool belong to the SQLAlchemy Engine or to a single Session object?
It would be better if you provided the relevant part of your code. But first of all you should read the official docs on thread-local sessions.
As I understand it, we need to use a separate session for each thread. When I wrote my first applications I didn't do this, and I sometimes got errors.
There is a SQLAlchemy tool for CherryPy; you can use that tool or write a simpler one yourself.
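The per-thread pattern the docs describe is what scoped_session implements. A hedged sketch, using an in-memory SQLite engine purely for illustration (the pool belongs to the Engine, not to any one Session):

```python
# scoped_session keeps a registry of sessions, one per thread.
import threading
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, scoped_session

engine = create_engine("sqlite://")
Session = scoped_session(sessionmaker(bind=engine))

sessions = {}

def handle_request(name):
    # Each thread gets its own Session from the registry; in a real app
    # you would run queries here and call Session.remove() at the end.
    sessions[name] = Session()

t1 = threading.Thread(target=handle_request, args=("req1",))
t2 = threading.Thread(target=handle_request, args=("req2",))
t1.start(); t2.start(); t1.join(); t2.join()

# Within one thread, repeated calls hand back the same session object.
same_in_thread = Session() is Session()
Session.remove()
```

Hooking Session.remove() into CherryPy's end-of-request machinery gives you the one-session-per-request lifecycle the question asks about.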
If you use Django you can simply create an instance of one of your models, fill it with data, and call save() on it, and it will be saved to the database. You don't have to pass in a "connection" parameter or do anything special. Also, your views are just simple callables, so there seems to be no hidden magic. I.e. this works:
from django.http import HttpResponse
from models import MyModel

def a_simple_view(request):
    instance = MyModel(some_field="Foobar")
    instance.save()
    return HttpResponse("<html><body>Jep, just saved</body></html>")
So the question is: How does my freshly created model instance get a database connection to save itself? And as a follow-up: Is this a sensible way to do it?
How does my freshly created model instance get a database connection to save itself?
Essentially, each model has a Manager that knows the database connection. In reality it is a bit more complicated, because the manager delegates the connection creation and management (to database routers and connection managers).
Is this a sensible way to do it?
Well, that's a question that cannot be answered without context, really. In the context of what a Django model is, this is the sensible approach because as a developer you do not have to concern yourself with connection management.
If you're asking whether Django takes a sensible approach to connection management, and you are worried it may not, here's what the Django documentation has to say about it:
Django opens a connection to the database when it first makes a
database query. It keeps this connection open and reuses it in
subsequent requests. Django closes the connection once it exceeds the
maximum age defined by CONN_MAX_AGE or when it isn’t usable any
longer.
and:
Since each thread maintains its own connection, your database must
support at least as many simultaneous connections as you have worker
threads.
So now the question is: when and how many threads are created? This depends on the server used. E.g. the development server starts a new thread for every request, whereas gunicorn reuses threads across requests.
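The connection lifetime quoted above is governed by the CONN_MAX_AGE setting. As a hedged illustration, the database name and the specific values here are assumptions, not anything from the question:

```python
# settings.py (fragment) -- illustrative values, adjust to your deployment
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",          # hypothetical database name
        "CONN_MAX_AGE": 60,      # reuse a connection for up to 60 seconds
        # "CONN_MAX_AGE": None,  # keep connections open indefinitely
        # "CONN_MAX_AGE": 0,     # the default: close after each request
    }
}
```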
It is hard to describe exactly how Django does this, but you can learn more by looking at the Django source code, specifically the django.db module.
There's a DB abstraction layer so that Django works with many databases such as SQLite, MySQL, and PostgreSQL. There's connection reuse so that Django can use the same connection for subsequent queries. All these things are used by a Django model when a DB query is run. In the end it is not simple, and you should check the source code for detailed answers.
The identity map and unit of work patterns are part of the reasons SQLAlchemy is much more attractive than django.db. However, I am not sure how the identity map would work, or if it works at all, when an application is configured as WSGI and the ORM is accessed directly through API calls instead of a shared service. I would imagine that Apache would create a new thread with its own Python instance for each request. Each instance would therefore have its own instances of the SQLAlchemy classes and not be able to make use of the identity map. Is this correct?
I think you misunderstood the identity map pattern.
From: http://martinfowler.com/eaaCatalog/identityMap.html
An Identity Map keeps a record of all
objects that have been read from the
database in a single business
transaction.
Records are kept in the identity map for a single business transaction. This means that no matter how your web server is configured, you probably will not hold them for longer than a request (or store them in the session).
Normally, you will not have many users taking part in a single business transaction. In any case, you probably don't want your users to share objects, as they might end up doing things that are contradictory.
So this all depends on how you set up your SQLAlchemy connection. Normally you arrange for each WSGI request to have its own thread-local session. This session knows about everything that happened within it: items added, changed, and so on. However, each thread is not aware of the others. This way the loading and configuring of the models and mappings is shared at startup time, while each request can operate independently of the others.
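The per-session scope of the identity map is easy to see directly. A small hedged sketch with an in-memory SQLite database and a made-up User model: within one session, two lookups of the same row hand back the very same Python object.

```python
# The identity map in action: one object per row, per session (SQLAlchemy 1.4+).
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(id=1, name="Ann"))
    session.commit()

    a = session.get(User, 1)
    b = session.get(User, 1)       # no second SELECT needed for the same row
    same_object = a is b           # the identity map returns the same object
```

A different session (for example, one in another request's thread) would build its own map and its own object for the same row, which is exactly the isolation you want between requests.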
I'm writing a simple app with AppEngine, using Python. After a successful insert by a user and redirect, I'd like to display a flash confirmation message on the next page.
What's the best way to keep state between one request and the next? Or is this not possible because AppEngine is distributed? I guess, the underlying question is whether AppEngine provides a persistent session object.
Thanks
Hannes
No session support is included in App Engine itself, but you can add your own session support.
GAE Utilities is one library made specifically for this; a more heavyweight alternative is to use django sessions through App Engine Patch.
The ways to reliably keep state between requests are memcache, the datastore, or going through the user (cookies or POST/GET).
You can use the runtime cache too, but this is very unreliable: you don't know whether a request will end up in the same runtime, and the runtime can drop its entire cache whenever it feels like it.
I really wouldn't use the runtime cache except in very specific situations. For example, I use it to cache the serialization of objects to JSON, as that is pretty slow, and if the cached value is gone I can regenerate the result easily.