Separate session objects for separate REST requests in SQLAlchemy?

I am using SQLAlchemy (with MySQL as the database) with CherryPy. I have created an engine which is application-wide. I read in the SQLAlchemy Session documentation that sessions are not thread-safe. Does that mean I should create a separate session for each REST request? If so, does the session use the default connection pool in SQLAlchemy (with pool_size=5 and max_overflow=10)? Does this mean that 15 (pool_size + max_overflow) concurrent requests can be handled without a problem? Also, does a single connection pool belong to the SQLAlchemy Engine or to a single Session object?

It would help if you provided the relevant part of your code, but first of all you should read the official documentation on thread-local sessions.
As I understand it, you need to use a separate session for each thread, and therefore for each concurrent request. The connection pool itself belongs to the Engine, not to a Session: each session checks a connection out of the engine's pool while it works, so with pool_size=5 and max_overflow=10 a single engine can serve up to 15 concurrent requests before callers have to wait. When I built my first applications I didn't do this, and I sometimes got errors.
There is a SQLAlchemy tool for CherryPy; you can use that tool or write a simpler one yourself, along the lines of the sketch below.
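A minimal sketch of the session-per-request pattern, assuming a scoped_session torn down by a custom CherryPy tool (the tool name, credentials, and query are placeholders, not from the question):

import cherrypy
from sqlalchemy import create_engine, text
from sqlalchemy.orm import scoped_session, sessionmaker

# One engine, and therefore one connection pool, per process.
engine = create_engine('mysql://user:password@localhost/mydb',
                       pool_size=5, max_overflow=10)

# scoped_session gives each thread (and so each CherryPy request)
# its own Session, created lazily on first use.
Session = scoped_session(sessionmaker(bind=engine))

def _close_session():
    # Discard this thread's session and return its connection to the pool.
    Session.remove()

# Hypothetical tool name; it runs _close_session when each request ends.
cherrypy.tools.db_cleanup = cherrypy.Tool('on_end_request', _close_session)

class App:
    @cherrypy.expose
    @cherrypy.tools.db_cleanup()
    def index(self):
        return str(Session.execute(text('SELECT 1')).scalar())

if __name__ == '__main__':
    cherrypy.quickstart(App())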

Related

Using Flask and Flask-SQLAlchemy, how could I change the database connection in a route?

Business case:
We have multiple databases that need to be accessed, and we don't know which one until a URL/route is called. The database server and database name are part of the route.
example: http://<flask_server>/<db_server>/<db_name>/weeklyreport
Since standard Flask-SQLAlchemy uses app settings to define the DB connection, and app settings cannot (should not) be changed at runtime... how could one accomplish this?
I've just sidestepped the issue by using straight SQLAlchemy instead of Flask-SQLAlchemy, as I could find no way around this. For my needs I won't miss the benefits of the Flask wrappers.
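For reference, a minimal sketch of that plain-SQLAlchemy approach, building the engine from the route parameters; the URI template, credentials, engine cache, and report query are illustrative assumptions:

from flask import Flask, jsonify
from sqlalchemy import create_engine, text

app = Flask(__name__)
_engines = {}  # cache one engine per (server, database) pair

def get_engine(db_server, db_name):
    key = (db_server, db_name)
    if key not in _engines:
        # Hypothetical credentials; in practice load them from config.
        uri = f'postgresql://report:secret@{db_server}/{db_name}'
        _engines[key] = create_engine(uri)
    return _engines[key]

@app.route('/<db_server>/<db_name>/weeklyreport')
def weeklyreport(db_server, db_name):
    engine = get_engine(db_server, db_name)
    with engine.connect() as conn:
        rows = conn.execute(text('SELECT week, total FROM weekly_report'))
        return jsonify([{'week': w, 'total': t} for w, t in rows])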

Handle connections to user defined DB in Django

I have a pretty simple model. A user defines the URL and database name of his own Postgres server. My Django backend fetches some info from the client DB to do some calculations and analytics and draw some graphs.
How should I handle connections? Create a new one when the client opens a page, or keep connections alive all the time? (There are about 250-300 possible clients.)
Can I use the Django ORM or something like SQLAlchemy? Or even the psycopg library directly?
Has anyone tackled such a problem before?
Thanks
In your case, I would rather go with Django's internal implementation and the Django ORM, as you will not need to worry about handling connections and the various exceptions that could arise from your own implementation of a DAO layer.
As for your requirement to access user databases: there is still overhead for individual users in creating a database and setting up something to connect with your codebase, so I think sticking with Django is the sounder choice.
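A minimal sketch of one way to do that with the Django ORM: register the client's database as a runtime alias and query it with using(). Mutating settings.DATABASES like this is a well-known workaround rather than a documented API, and the model and field names are illustrative:

from django.conf import settings

def register_client_db(alias, host, name, user, password):
    # Add the alias before its first use; Django opens the actual
    # connection lazily, on the first query against the alias.
    if alias not in settings.DATABASES:
        settings.DATABASES[alias] = {
            'ENGINE': 'django.db.backends.postgresql',
            'HOST': host,
            'NAME': name,
            'USER': user,
            'PASSWORD': password,
            # Close after each request instead of holding 250-300
            # client connections open.
            'CONN_MAX_AGE': 0,
        }

# In a view, once the client's details are loaded from your own DB:
#   register_client_db('client_42', client.host, client.db_name, ...)
#   rows = Report.objects.using('client_42').all()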

Do I need to use SQLAlchemy sessions?

The official tutorial for SQLAlchemy provides examples that make use of the session system, such as the following:
>>> from sqlalchemy.orm import sessionmaker
>>> Session = sessionmaker(bind=engine)
Many unofficial tutorials also make use of sessions; however, some don't use them at all, instead opting for whatever one would call this approach:
from sqlalchemy import create_engine

e = create_engine('sqlite:///company.db')
conn = e.connect()
result = conn.execute("SELECT first_name FROM employee")  # returns a result set, not a query
Why are sessions needed at all when this much simpler system seems to do the same thing? The official documentation doesn't make it clear why this would be necessary, as far as I've seen.
There is one particularly relevant section in the official SQLAlchemy documentation:
A web application is the easiest case because such an application is already constructed around a single, consistent scope - this is the request, which represents an incoming request from a browser, the processing of that request to formulate a response, and finally the delivery of that response back to the client. Integrating web applications with the Session is then the straightforward task of linking the scope of the Session to that of the request. The Session can be established as the request begins, or using a lazy initialization pattern which establishes one as soon as it is needed. The request then proceeds, with some system in place where application logic can access the current Session in a manner associated with how the actual request object is accessed. As the request ends, the Session is torn down as well, usually through the usage of event hooks provided by the web framework. The transaction used by the Session may also be committed at this point, or alternatively the application may opt for an explicit commit pattern, only committing for those requests where one is warranted, but still always tearing down the Session unconditionally at the end.
...and...
Some web frameworks include infrastructure to assist in the task of aligning the lifespan of a Session with that of a web request. This includes products such as Flask-SQLAlchemy, for usage in conjunction with the Flask web framework, and Zope-SQLAlchemy, typically used with the Pyramid framework. SQLAlchemy recommends that these products be used as available.
Unfortunately I still can't tell whether I need to use sessions, or if the final paragraph is implying that certain implementations such as Flask-SQLAlchemy are already managing sessions automatically.
Do I need to use sessions? Is there a significant risk to not using sessions? Am I already using sessions because I'm using Flask-SQLAlchemy?
As you pointed out, sessions are not strictly necessary if you only construct and execute queries using plain SQLAlchemy Core. However, they provide the higher layer of abstraction required to take advantage of the SQLAlchemy ORM: a Session maintains a graph of modified objects and makes sure the changes are efficiently and consistently flushed to the database when necessary.
Since you already use Flask-SQLAlchemy, I don't see a reason to avoid sessions even if you don't need the ORM features. The extension handles all the plumbing necessary for isolating requests, so that you don't have to reinvent the wheel and can focus on your application code.
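To make that concrete, a minimal sketch of Flask-SQLAlchemy doing the session plumbing for you; the model and route are illustrative:

from flask import Flask, jsonify
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///company.db'
db = SQLAlchemy(app)

class Employee(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    first_name = db.Column(db.String(50))

@app.route('/employees')
def employees():
    # db.session is a scoped session tied to the request's app context;
    # the extension removes it automatically when the context ends.
    return jsonify([e.first_name for e in Employee.query.all()])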
You need sessions if you want to use ORM features (all three below are sketched after this list), including:
Changing an attribute on an object returned by a query and having it easily written back to the database
Creating an object using normal Python constructors and having it easily sent to the database
Having relationships on objects, so that, for example, if you had a blog with a post object, you could write post.author to access the User object responsible for that post
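A minimal sketch of those three features, using SQLAlchemy 1.4+ imports and a hypothetical blog schema (User and Post are illustrative):

from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Post(Base):
    __tablename__ = 'posts'
    id = Column(Integer, primary_key=True)
    title = Column(String)
    author_id = Column(Integer, ForeignKey('users.id'))
    author = relationship('User')  # post.author yields the User object

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as session:
    # 2. Create objects with normal constructors and persist them.
    alice = User(name='alice')
    post = Post(title='Hello', author=alice)
    session.add(post)  # alice is cascaded in automatically
    session.commit()

    # 1. Mutate an attribute; the session flushes the UPDATE on commit.
    post.title = 'Hello, world'
    session.commit()

    # 3. Traverse a relationship.
    print(post.author.name)  # -> 'alice'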
I'll note also that even if you use Session, you don't actually need sessionmaker. In a web application you probably want it, but if you're looking for simplicity you can use sessions like this:
from sqlalchemy import create_engine
from sqlalchemy.orm import Session

engine = create_engine(...)
session = Session(bind=engine)
Again, I think you probably want sessionmaker in your app, but you don't need it for sessions to work.

Best way to handle switching between a large number of databases in Flask with SQLAlchemy and Postgres

I have a large app that has multiple databases with identical schemas. What is the best solution so that users can dynamically switch between databases to make queries? Creating an engine for each database is not a solution, as I receive a 'too many connections' Postgres error. The main issue here is creating too many engines, so is there a way to remove an engine after using it? The number of databases will be in the thousands, with several hundred users connected at the same time.
Thanks.
Edit: here is the connection code:

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine(database_uri)
Session = sessionmaker(bind=engine)  # a session factory, not a session

This is done every time a connection to a particular database is required, and it is done dynamically (whenever a user requests it within the app). The issue is that when this is done many times, the 'too many connections' error appears.
What is the best way to go about closing the engine?
We had the same issue. In our multi-tenant app we issue a SET search_path TO <schemaName> query in Postgres to serve DML for a particular client. You can check a detailed implementation for SQLAlchemy here.
Regarding the 'too many connections' issue, it's better to use a connection pool; an answer for SQLAlchemy is here.
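A minimal sketch of that approach, assuming the per-client databases can be restructured as identically-shaped schemas inside one database, so that a single engine and pool serve everyone (all names are illustrative):

from sqlalchemy import create_engine, text

# One engine, and one pool, shared by all tenants.
engine = create_engine('postgresql://app:secret@localhost/maindb',
                       pool_size=10, max_overflow=20)

def query_tenant(schema_name, sql):
    with engine.connect() as conn:
        # Point this connection at the tenant's schema for the
        # duration of the checkout.
        conn.execute(text("SELECT set_config('search_path', :schema, false)"),
                     {'schema': schema_name})
        return conn.execute(text(sql)).fetchall()

Note that a connection goes back to the pool with its search_path still set, so a real implementation should reset it on checkin (for example, via a pool event). And if you truly must create separate engines, engine.dispose() closes an engine's pooled connections once you are done with it.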

sqlalchemy identity map question

The identity map and unit of work patterns are part of the reason SQLAlchemy is much more attractive than django.db. However, I am not sure how the identity map would work, or whether it works at all, when an application is deployed as WSGI and the ORM is accessed directly through API calls instead of through a shared service. I would imagine that Apache would create a new thread, with its own Python instance, for each request. Each instance would therefore have its own instances of the SQLAlchemy classes and not be able to make use of the identity map. Is this correct?
I think you misunderstood the identity map pattern.
From http://martinfowler.com/eaaCatalog/identityMap.html:
An Identity Map keeps a record of all objects that have been read from the database in a single business transaction.
Records are kept in the identity map for a single business transaction. This means that no matter how your web server is configured, you probably will not hold them for longer than a request (or store them in the session).
Normally, you will not have many users taking part in a single business transaction. In any case, you probably don't want your users to share objects, as they might end up doing contradictory things.
So this all depends on how you set up your SQLAlchemy connection. Normally you arrange for each WSGI request to have its own thread-local session. That session knows about everything that happens within it: items added, changed, and so on. However, each thread is not aware of the others. This way, the loading and preconfiguring of the models and mappings is shared at startup time, while each request can operate independently of the others.
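A minimal sketch of what that means in practice, using a thread-local scoped_session; the User model and in-memory SQLite URL are illustrative:

from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import scoped_session, sessionmaker, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
Session = scoped_session(sessionmaker(bind=engine))

Session.add(User(id=1, name='alice'))
Session.commit()

a = Session.query(User).get(1)  # first lookup for this key
b = Session.query(User).get(1)  # answered from the identity map, no new query
assert a is b                   # the very same Python object in this session

Session.remove()  # end of request: the map is discarded with the session;
                  # another thread's session keeps its own, separate map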
