I've been using the request/application context for some time without fully understanding how it works or why it was designed the way it was. What is the purpose of the "stack" when it comes to the request or application context? Are these two separate stacks, or are they both part of one stack? Is the request context pushed onto a stack, or is it a stack itself? Am I able to push/pop multiple contexts on top of each other? If so, why would I want to do that?
Sorry for all the questions, but I'm still confused after reading the documentation for Request Context and Application Context.
Multiple Apps
The application context (and its purpose) is indeed confusing until you realize that Flask can have multiple apps. Imagine the situation where you want to have a single WSGI Python interpreter run multiple Flask applications. We're not talking Blueprints here, we're talking entirely different Flask applications.
You might set this up similarly to the example in the Flask documentation's "Application Dispatching" section:
from werkzeug.wsgi import DispatcherMiddleware
from frontend_app import application as frontend
from backend_app import application as backend
application = DispatcherMiddleware(frontend, {
    '/backend': backend
})
Notice that there are two completely different Flask applications involved: "frontend" and "backend". In other words, the Flask(...) application constructor has been called twice, creating two instances of a Flask application.
Contexts
When you are working with Flask, you often end up using global variables to access various functionality. For example, you probably have code that reads...
from flask import request
Then, during a view, you might use request to access the information of the current request. Obviously, request is not a normal global variable; in actuality, it is a context local value. In other words, there is some magic behind the scenes that says "when I call request.path, get the path attribute from the request object of the CURRENT request." Two different requests will have different results for request.path.
In fact, even if you run Flask with multiple threads, Flask is smart enough to keep the request objects isolated. In doing so, it becomes possible for two threads, each handling a different request, to simultaneously call request.path and get the correct information for their respective requests.
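To see this for yourself, here is a small sketch (using Flask's test_request_context() helper to simulate two requests; the paths are made up) showing the same global request resolving differently depending on which context is currently pushed:
from flask import Flask, request

app = Flask(__name__)

with app.test_request_context('/users'):
    print(request.path)    # '/users'

with app.test_request_context('/orders'):
    print(request.path)    # '/orders'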
Putting it Together
So we've already seen that Flask can handle multiple applications in the same interpreter, and also that because of the way that Flask allows you to use "context local" globals there must be some mechanism to determine what the "current" request is (in order to do things such as request.path).
Putting these ideas together, it should also make sense that Flask must have some way to determine what the "current" application is!
You probably also have code similar to the following:
from flask import url_for
Like our request example, the url_for function has logic that is dependent on the current environment. In this case, however, it is easy to see that the logic is heavily dependent on which app is considered the "current" app. In the frontend/backend example shown above, both the "frontend" and "backend" apps could have a "/login" route, and so url_for('/login') should return something different depending on whether the view is handling the request for the frontend or backend app.
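Here is a minimal hedged sketch of that idea (the endpoint names and routes are made up, not taken from the dispatcher example above): the same url_for() call resolves against whichever app's context is currently pushed.
from flask import Flask, url_for

frontend = Flask('frontend')
backend = Flask('backend')

@frontend.route('/login', endpoint='login')
def frontend_login():
    return 'frontend login'

@backend.route('/signin', endpoint='login')
def backend_login():
    return 'backend sign-in'

with frontend.test_request_context():
    print(url_for('login'))    # '/login'  -- resolved against the frontend app

with backend.test_request_context():
    print(url_for('login'))    # '/signin' -- resolved against the backend app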
To answer your questions...
What is the purpose of the "stack" when it comes to the request or application context?
From the Request Context docs:
Because the request context is internally maintained as a stack you can push and pop multiple times. This is very handy to implement things like internal redirects.
In other words, even though you typically will have 0 or 1 items on these stacks of "current" requests or "current" applications, it is possible that you could have more.
The example given is where you would have your request return the results of an "internal redirect". Let's say a user requests resource A, but you want to return resource B to the user. In most cases, you issue a redirect and point the user to resource B, meaning the user will make a second request to fetch B. A slightly different way of handling this would be to do an internal redirect, which means that while processing A, Flask will make a new request to itself for resource B, and use the results of this second request as the results of the user's original request.
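Here is one possible way to sketch such an internal redirect (a hedged example, not necessarily how you would ship it; the routes are made up and it uses Flask's public test_request_context() and full_dispatch_request() helpers): while handling A, a second request context for B is pushed on top of the current one and its response is returned.
from flask import Flask

app = Flask(__name__)

@app.route('/b')
def b():
    return 'contents of B'

@app.route('/a')
def a():
    # Push a second request context on top of the one for /a and dispatch it;
    # the result of /b becomes the response to the user's original request.
    with app.test_request_context('/b'):
        return app.full_dispatch_request()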
Are these two separate stacks, or are they both part of one stack?
They are two separate stacks. However, this is an implementation detail. What's more important is not so much that there is a stack, but the fact that at any time you can get the "current" app or request (top of the stack).
Is the request context pushed onto a stack, or is it a stack itself?
A "request context" is one item of the "request context stack". Similarly with the "app context" and "app context stack".
Am I able to push/pop multiple contexts on top of each other? If so, why would I want to do that?
In a Flask application, you typically would not do this. One example of where you might want to do this is for an internal redirect (described above). Even in that case, however, you would probably end up having Flask handle a new request, and so Flask would do all of the pushing/popping for you.
However, there are some cases where you'd want to manipulate the stack yourself.
Running code outside of a request
One typical problem people have is that they use the Flask-SQLAlchemy extension to set up a SQL database and model definition using code something like what is shown below...
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
db = SQLAlchemy()  # Initialize the Flask-SQLAlchemy extension object
db.init_app(app)
Then they use the app and db values in a script that should be run from the shell. For example, a "setup_tables.py" script...
from myapp import app, db
# Set up models
db.create_all()
In this case, the Flask-SQLAlchemy extension knows about the app application, but during create_all() it will throw an error complaining about there not being an application context. This error is justified; you never told Flask what application it should be dealing with when running the create_all method.
You might be wondering why you don't need this `with app.app_context()` call when you run similar functions in your views. The reason is that Flask already handles the management of the application context for you when it is handling actual web requests. The problem really only comes up outside of these view functions (or other such callbacks), such as when using your models in a one-off script.
The resolution is to push the application context yourself, which can be done by doing...
from myapp import app, db
# Set up models
with app.app_context():
    db.create_all()
This will push a new application context (using the application of app, remember there could be more than one application).
Testing
Another case where you would want to manipulate the stack is for testing. You could create a unit test that sets up a request context and then checks the results:
import unittest
from flask import request
class MyTest(unittest.TestCase):

    def test_thing(self):
        with app.test_request_context('/?next=http://example.com/') as ctx:
            # You can now view attributes on the request context stack by using `request`.
            pass
        # Now the request context stack is empty again
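For completeness, here is a slightly fuller hedged sketch of such a test (the app, the query string, and the assertion are illustrative assumptions), showing that the with-block does the pushing and popping for you:
import unittest
from flask import Flask, request

app = Flask(__name__)

class MyTest(unittest.TestCase):

    def test_next_param(self):
        with app.test_request_context('/?next=http://example.com/'):
            # A request context is on the stack here, so `request` resolves.
            self.assertEqual(request.args['next'], 'http://example.com/')
        # Leaving the with-block popped the request context again.

if __name__ == '__main__':
    unittest.main()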
Previous answers already give a nice overview of what goes on in the background of Flask during a request. If you haven't read it yet I recommend @Mark Hildreth's answer prior to reading this. In short, a new context (thread) is created for each http request, which is why it's necessary to have a thread Local facility that allows objects such as request and g to be accessible globally across threads, while maintaining their request-specific context. Furthermore, while processing an http request Flask can emulate additional requests from within, hence the necessity to store their respective context on a stack. Also, Flask allows multiple wsgi applications to run alongside each other within a single process, and more than one can be called to action during a request (each request creates a new application context), hence the need for a context stack for applications. That's a summary of what was covered in previous answers.
My goal now is to complement our current understanding by explaining how Flask and Werkzeug do what they do with these context locals. I simplified the code to enhance the understanding of its logic, but if you get this, you should be able to easily grasp most of what's in the actual source (werkzeug.local and flask.globals).
Let's first understand how Werkzeug implements thread Locals.
Local
When an http request comes in, it is processed within the context of a single thread. As an alternative means of spawning a new context during an http request, Werkzeug also allows the use of greenlets (a sort of lighter "micro-threads") instead of normal threads. If you don't have greenlets installed it will revert to using threads instead. Each of these threads (or greenlets) is identifiable by a unique id, which you can retrieve with the module's get_ident() function. That function is the starting point to the magic behind having request, current_app, url_for, g, and other such context-bound global objects.
try:
    from greenlet import get_ident
except ImportError:
    from thread import get_ident
Now that we have our identity function, we can know which thread we're on at any given time, and we can create what's called a thread Local: a contextual object that can be accessed globally, but whose attributes resolve to their value for that specific thread.
e.g.
# globally
local = Local()
# ...
# on thread 1
local.first_name = 'John'
# ...
# on thread 2
local.first_name = 'Debbie'
Both values are present on the globally accessible Local object at the same time, but accessing local.first_name within the context of thread 1 will give you 'John', whereas it will return 'Debbie' on thread 2.
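If you want to see that behaviour yourself, here is a small runnable sketch (assuming Werkzeug is installed) in which two threads write to the same Local and each reads back only its own value:
import threading
from werkzeug.local import Local

local = Local()

def worker(name):
    # stored under this thread's identity inside the Local
    local.first_name = name
    print(threading.current_thread().name, '->', local.first_name)

t1 = threading.Thread(target=worker, args=('John',))
t2 = threading.Thread(target=worker, args=('Debbie',))
t1.start(); t2.start()
t1.join(); t2.join()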
How is that possible? Let's look at some (simplified) code:
class Local(object):
    def __init__(self):
        # use object.__setattr__ here, otherwise our own __setattr__ below
        # would be triggered before `storage` even exists
        object.__setattr__(self, 'storage', {})

    def __getattr__(self, name):
        context_id = get_ident()  # we get the current thread's or greenlet's id
        contextual_storage = self.storage.setdefault(context_id, {})
        try:
            return contextual_storage[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        context_id = get_ident()
        contextual_storage = self.storage.setdefault(context_id, {})
        contextual_storage[name] = value

    def __release_local__(self):
        context_id = get_ident()
        self.storage.pop(context_id, None)

local = Local()
From the code above we can see that the magic boils down to get_ident() which identifies the current greenlet or thread. The Local storage then just uses that as a key to store any data contextual to the current thread.
You can have multiple Local objects per process, and request, g, current_app and others could simply have been created like that. But that's not how it's done in Flask, in which these are not technically Local objects but, more accurately, LocalProxy objects. What's a LocalProxy?
LocalProxy
A LocalProxy is an object that queries a Local to find another object of interest (i.e. the object it proxies to). Let's take a look to understand:
class LocalProxy(object):
    def __init__(self, local, name=None):
        # `local` here is either an actual `Local` object, that can be used
        # to find the object of interest, here identified by `name`, or it's
        # a callable that can resolve to that proxied object on its own.
        # We bypass our own `__setattr__` (defined further down, it proxies
        # attribute assignment to the underlying object) by using
        # `object.__setattr__` directly.
        object.__setattr__(self, 'local', local)
        # `name` is an identifier that will be passed to the local to find the
        # object of interest; it is not needed when `local` is a callable.
        object.__setattr__(self, 'name', name)

    def _get_current_object(self):
        # if `self.local` is truly a `Local` it means that it implements
        # the `__release_local__()` method which, as its name implies, is
        # normally used to release the local. We simply look for it here
        # to identify which is actually a Local and which is rather just
        # a callable:
        if hasattr(self.local, '__release_local__'):
            try:
                return getattr(self.local, self.name)
            except AttributeError:
                raise RuntimeError('no object bound to %s' % self.name)

        # if self.local is not actually a Local it must be a callable that
        # resolves to the object of interest when called with no arguments.
        return self.local()

    # Now for the LocalProxy to perform its intended duties, i.e. proxying
    # to an underlying object located somewhere in a Local, we turn all magic
    # methods into proxies for the same methods on the object of interest.

    @property
    def __dict__(self):
        try:
            return self._get_current_object().__dict__
        except RuntimeError:
            raise AttributeError('__dict__')

    def __repr__(self):
        try:
            return repr(self._get_current_object())
        except RuntimeError:
            return '<%s unbound>' % self.__class__.__name__

    def __bool__(self):
        try:
            return bool(self._get_current_object())
        except RuntimeError:
            return False

    # ... etc etc ...

    def __getattr__(self, name):
        if name == '__members__':
            return dir(self._get_current_object())
        return getattr(self._get_current_object(), name)

    def __setitem__(self, key, value):
        self._get_current_object()[key] = value

    def __delitem__(self, key):
        del self._get_current_object()[key]

    # ... and so on ...
    __setattr__ = lambda x, n, v: setattr(x._get_current_object(), n, v)
    __delattr__ = lambda x, n: delattr(x._get_current_object(), n)
    __str__ = lambda x: str(x._get_current_object())
    __lt__ = lambda x, o: x._get_current_object() < o
    __le__ = lambda x, o: x._get_current_object() <= o
    __eq__ = lambda x, o: x._get_current_object() == o
    # ... and so forth ...
Now to create globally accessible proxies you would do
# this would happen some time near application start-up
local = Local()
request = LocalProxy(local, 'request')
g = LocalProxy(local, 'g')
and now, some time early in the course of a request, you would store some objects inside the local that the previously created proxies can access, no matter which thread we're on
# this would happen early during processing of an http request
local.request = RequestContext(http_environment)
local.g = SomeGeneralPurposeContainer()
The advantage of using LocalProxy as globally accessible objects rather than making them Locals themselves is that it simplifies their management. You just need a single Local object to create many globally accessible proxies. At the end of the request, during cleanup, you simply release the one Local (i.e. you pop the context_id from its storage) and don't bother with the proxies: they're still globally accessible and still defer to the one Local to find their object of interest for subsequent http requests.
# this would happen some time near the end of request processing
release(local) # aka local.__release_local__()
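Here is a small runnable sketch of that lifecycle using the real werkzeug.local module (the dict standing in for a request object is just an illustration):
from werkzeug.local import Local, LocalProxy, release_local

local = Local()
request = LocalProxy(local, 'request')   # created once, near start-up

local.request = {'path': '/a'}           # bound early during a request
print(request['path'])                   # '/a' -- the proxy defers to the local

release_local(local)                     # cleanup at the end of the request
# `request` is still importable and globally accessible; accessing it now
# raises a RuntimeError until the next request binds local.request again.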
To simplify the creation of a LocalProxy when we already have a Local, Werkzeug implements the Local.__call__() magic method as follows:
class Local(object):
    # ...
    # ... all same stuff as before goes here ...
    # ...

    def __call__(self, name):
        return LocalProxy(self, name)

# now you can do
local = Local()
request = local('request')
g = local('g')
However, if you look in the Flask source (flask.globals), that's still not how request, g, current_app and session are created. As we've established, Flask can spawn multiple "fake" requests (from a single true http request) and in the process also push multiple application contexts. This isn't a common use-case, but it's a capability of the framework. Since these "concurrent" requests and apps are still limited to run with only one having the "focus" at any time, it makes sense to use a stack for their respective contexts. Whenever a new request is spawned or one of the applications is called, they push their context onto the top of their respective stack. Flask uses LocalStack objects for this purpose. When they conclude their business they pop the context off the stack.
LocalStack
This is what a LocalStack looks like (again the code is simplified to facilitate understanding of its logic).
class LocalStack(object):
    def __init__(self):
        self.local = Local()

    def push(self, obj):
        """Pushes a new item to the stack"""
        rv = getattr(self.local, 'stack', None)
        if rv is None:
            self.local.stack = rv = []
        rv.append(obj)
        return rv

    def pop(self):
        """Removes the topmost item from the stack, will return the
        old value or `None` if the stack was already empty.
        """
        stack = getattr(self.local, 'stack', None)
        if stack is None:
            return None
        elif len(stack) == 1:
            release_local(self.local)  # this simply releases the local
            return stack[-1]
        else:
            return stack.pop()

    @property
    def top(self):
        """The topmost item on the stack. If the stack is empty,
        `None` is returned.
        """
        try:
            return self.local.stack[-1]
        except (AttributeError, IndexError):
            return None
Note from the above that a LocalStack is a stack stored in a local, not a bunch of locals stored on a stack. This implies that although the stack is globally accessible it's a different stack in each thread.
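A quick runnable sketch with the real werkzeug.local.LocalStack shows the push/pop/top interface (the dicts pushed here are just stand-ins for context objects):
from werkzeug.local import LocalStack

stack = LocalStack()

stack.push({'name': 'outer'})
stack.push({'name': 'inner'})
print(stack.top['name'])   # 'inner' -- the most recently pushed context

stack.pop()
print(stack.top['name'])   # 'outer' -- back to the previous context

stack.pop()
print(stack.top)           # None -- this thread's stack is empty again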
Flask doesn't have its request, current_app, g, and session objects resolve directly to a LocalStack; rather, it uses LocalProxy objects that wrap a lookup function (instead of a Local object) that will find the underlying object from the LocalStack:
_request_ctx_stack = LocalStack()

def _find_request():
    top = _request_ctx_stack.top
    if top is None:
        raise RuntimeError('working outside of request context')
    return top.request

request = LocalProxy(_find_request)

def _find_session():
    top = _request_ctx_stack.top
    if top is None:
        raise RuntimeError('working outside of request context')
    return top.session

session = LocalProxy(_find_session)

_app_ctx_stack = LocalStack()

def _find_g():
    top = _app_ctx_stack.top
    if top is None:
        raise RuntimeError('working outside of application context')
    return top.g

g = LocalProxy(_find_g)

def _find_app():
    top = _app_ctx_stack.top
    if top is None:
        raise RuntimeError('working outside of application context')
    return top.app

current_app = LocalProxy(_find_app)
All these are declared at application start-up, but do not actually resolve to anything until a request context or application context is pushed to their respective stack.
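A hedged sketch of that behaviour, using Flask's public with-blocks (which push onto these stacks for you):
from flask import Flask, current_app

app = Flask(__name__)

# No application context has been pushed yet, so accessing current_app here
# would raise RuntimeError('Working outside of application context').

with app.app_context():          # pushes an app context onto the stack
    print(current_app.name)      # resolves to our `app`

# The context was popped on exit, so current_app is unbound again.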
If you're curious to see how a context is actually inserted in the stack (and subsequently popped out), look in flask.app.Flask.wsgi_app(), which is the point of entry of the wsgi app (i.e. what the web server calls and passes the http environment to when a request comes in), and follow the creation of the RequestContext object all through its subsequent push() onto _request_ctx_stack. Once pushed to the top of the stack, it's accessible via _request_ctx_stack.top. Here's some abbreviated code to demonstrate the flow:
So you start an app and make it available to the WSGI server...
app = Flask(*config, **kwconfig)
# ...
Later an http request comes in and the WSGI server calls the app with the usual params...
app(environ, start_response) # aka app.__call__(environ, start_response)
This is roughly what happens in the app...
class Flask(object):
    # ...

    def __call__(self, environ, start_response):
        return self.wsgi_app(environ, start_response)

    def wsgi_app(self, environ, start_response):
        ctx = RequestContext(self, environ)
        ctx.push()
        try:
            # process the request here,
            # raise an error if any,
            # and return the Response
            response = self.full_dispatch_request()
            return response(environ, start_response)
        finally:
            ctx.pop()

    # ...
and this is roughly what happens with RequestContext...
class RequestContext(object):
    def __init__(self, app, environ, request=None):
        self.app = app
        if request is None:
            request = app.request_class(environ)
        self.request = request
        self.url_adapter = app.create_url_adapter(self.request)
        self.session = self.app.open_session(self.request)
        if self.session is None:
            self.session = self.app.make_null_session()
        self.flashes = None

    def push(self):
        _request_ctx_stack.push(self)

    def pop(self):
        _request_ctx_stack.pop()
Say a request has finished initializing; the lookup of request.path from one of your view functions would then go as follows:
start from the globally accessible LocalProxy object request.
to find its underlying object of interest (the object it's proxying to) it calls its lookup function _find_request() (the function it registered as its self.local).
that function queries the LocalStack object _request_ctx_stack for the top context on the stack.
to find the top context, the LocalStack object first queries its inner Local attribute (self.local) for the stack property that was previously stored there.
from the stack it gets the top context.
top.request is thus resolved as the underlying object of interest.
from that object we get the path attribute.
So we've seen how Local, LocalProxy, and LocalStack work, now think for a moment of the implications and nuances in retrieving the path from:
a request object that would be a simple globally accessible object.
a request object that would be a local.
a request object stored as an attribute of a local.
a request object that is a proxy to an object stored in a local.
a request object stored on a stack, that is in turn stored in a local.
a request object that is a proxy to an object on a stack stored in a local. <- this is what Flask does.
A little addition to @Mark Hildreth's answer.
The context stack looks like {thread.get_ident(): []}, where [] is called a "stack" because only append (push), pop and [-1] (__getitem__(-1)) operations are used on it. So the context stack keeps the actual data for the current thread or greenlet.
current_app, g, request, session, etc. are LocalProxy objects which just override special methods such as __getattr__, __getitem__, __call__, __eq__, etc. and return the value from the top of the context stack ([-1]) for the given name (current_app or request, for example).
LocalProxy is needed so that these objects only have to be imported once and never go stale. So it is better to just import request wherever you are in the code instead of passing a request argument down through your functions and methods. You can easily write your own extensions with it, but keep in mind that frivolous usage can make code harder to understand.
Spend some time understanding https://github.com/mitsuhiko/werkzeug/blob/master/werkzeug/local.py.
So how are both stacks populated? On a request, Flask does the following (a small demonstration follows the list):
creates a request_context from the environment (initializes map_adapter, matches the path)
enters or pushes this request context:
clears the previous request_context
creates an app_context if it is missing and pushes it onto the application context stack
pushes this request context onto the request context stack
initializes the session if it is missing
dispatches the request
clears the request and pops it from the stack
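A hedged sketch of that flow using the private stacks (this relies on Flask 1.x-era internals; _request_ctx_stack and _app_ctx_stack were removed from flask in 2.3, so treat this purely as a demonstration):
from flask import Flask, _app_ctx_stack, _request_ctx_stack

app = Flask(__name__)

print(_app_ctx_stack.top, _request_ctx_stack.top)    # None None

with app.test_request_context('/'):
    print(_app_ctx_stack.top.app is app)             # True -- pushed implicitly
    print(_request_ctx_stack.top.request.path)       # '/'

print(_app_ctx_stack.top, _request_ctx_stack.top)    # None None again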
Let's take an example: suppose you want to set a user context (using the Local and LocalProxy constructs).
Define a User class:
class User(object):
    def __init__(self):
        self.userid = None
Define a function to retrieve the user object inside the current thread or greenlet:
def get_user(_local):
    try:
        # get user object in current thread or greenlet
        return _local.user
    except AttributeError:
        # if user object is not set in current thread, set an empty user object
        _local.user = User()
        return _local.user
Now define a LocalProxy
usercontext = LocalProxy(partial(get_user, Local()))
Now, to get the userid of the user in the current thread:
usercontext.userid
Explanation:
Local keeps a dict mapping an identity to an object. The identity is a thread id or greenlet id. In this example, _local.user = User() is equivalent to _local.__storage__[<current thread's id>]["user"] = User().
LocalProxy delegates operations to the wrapped Local object, or you can provide a function that returns the target object. In the example above, the get_user function provides the current user object to the LocalProxy, and when you ask for the current user's userid via usercontext.userid, the LocalProxy's __getattr__ first calls get_user to get the User object (user) and then calls getattr(user, "userid"). To set the userid on the User (in the current thread or greenlet) you simply do: usercontext.userid = "user_123".
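Putting the fragments together, here is a hedged, runnable version of the same idea (assuming Werkzeug is installed; the thread usage and userids are made up for illustration):
import threading
from functools import partial

from werkzeug.local import Local, LocalProxy

class User(object):
    def __init__(self):
        self.userid = None

def get_user(_local):
    try:
        return _local.user               # user object for this thread/greenlet
    except AttributeError:
        _local.user = User()             # lazily create it on first access
        return _local.user

usercontext = LocalProxy(partial(get_user, Local()))

def worker(uid):
    usercontext.userid = uid             # sets userid on this thread's User
    print(threading.current_thread().name, '->', usercontext.userid)

threading.Thread(target=worker, args=('user_123',)).start()
threading.Thread(target=worker, args=('user_456',)).start()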
Related
The recommended way to use httpx.Client() is as a context manager that will ensure the connections get properly cleaned-up upon exiting the with block.
But let us suppose I want to write a class that will instantiate an httpx.Client() session that can be reused throughout our code without having to put my entire script inside a with block.
import httpx

class APIWrapper:
    def __init__(self):
        self.session = httpx.Client()
        self.token = self.fetch_oauth_token()

    def fetch_oauth_token(self, **kwargs):
        r = self.session.get(endpoint)
        # Perform an authorization_code flow.
        return token

    def get(self, endpoint):
        r = self.session.get(endpoint, headers=self.headers)
        return r

    def __exit__(self):
        self.session.close()
api = APIWrapper()
api.get('https://api.some.url/statistics?location=worldwide')
<1000 lines of code>
api.get('https://api.some.url/users?location=denver')
In the illustrative example above I'm hoping to use a session for the API's OAuth authentication flow and later it can be re-used for any API calls that the user makes.
Is this a legit way to go about things or is it not a great idea? Would it be better to use separate sessions and force the user to use a with context manager for their own calls?
While searching I have seen that a session needs to be closed under the class __exit__ function. Is using the __exit__ function sufficient for ensuring proper clean-up (even if exceptions were to occur)? Is it equivalent to using the with-block way of doing it?
I'm setting up an aiohttp server using aiohttp_session to store data into an EncryptedCookieStorage. I use it to store a 7-days valid token, along with the expiration date and a refresh token.
I want, no matter which endpoint the client is accessing, to check if the token (stored in the session) needs some refreshment. The choice of a middleware was pretty obvious.
The problem is, when I call await aiohttp_session.get_session(request), I'm getting a nice RuntimeError asking me to setup the aiohttp_session middleware to the aiohttp.web.Application. My guess is that my custom middleware was called before the one handling the session loading, thus the session is not accessible yet. I've searched for some "priority" system regarding middlewares, but haven't found anything.
My server is set up in a main.py file like:
def main():
    app = web.Application()
    middleware.setup(app)
    session_key = base64.urlsafe_b64decode(fernet.Fernet.generate_key())
    aiohttp_session.setup(app, EncryptedCookieStorage(session_key))
    # I have tried swapping the two setup functions
    web.run_app(app)

if __name__ == '__main__':
    main()
Where the middleware.setup() is in a separate package, in the __init__.py:
# For each python file in the package, add its middleware function to the app middlewares
def setup(app):
    for filename in listdir('middleware'):
        if filename[-2:] == 'py' and filename[:2] != '__':
            module = __import__('rpdashboard.middleware.' + filename[:-3], fromlist=['middleware'])
            app.middlewares.append(module.middleware)
And finally, the middleware I want to get the session in is:
@web.middleware
async def refresh_token_middleware(request, handler):
    session = await get_session(request)
    if session.get('token'):
        pass  # To be implemented ...
    return await handler(request)

middleware = refresh_token_middleware
The error is raised here:
# From aiohttp_session
async def get_session(request):
    session = request.get(SESSION_KEY)
    if session is None:
        storage = request.get(STORAGE_KEY)
        if storage is None:
            # This is raised
            raise RuntimeError(
                "Install aiohttp_session middleware "
                "in your aiohttp.web.Application")
As I was saying earlier, it seems like the session is not meant to be accessed in a middleware, and isn't loaded yet. So how would I prevent my custom middleware from running before the session-loading one? Or could I simply run the aiohttp_session middleware manually myself? Is it even possible?
Yes, middleware components added to the app in the right order can access the session storage set by the session middleware.
The aiohttp documentation covers the priority order for middleware components in their Middlewares section:
Internally, a single request handler is constructed by applying the middleware chain to the original handler in reverse order, and is called by the RequestHandler as a regular handler.
Further down, they use an example to demonstrate what this means. In summary, they use two middleware components that report their entry and exit, and add them to the app.middlewares list in this order:
... middlewares=[middleware1, middleware2]
This ordering produces the following output:
Middleware 1 called
Middleware 2 called
Handler function called
Middleware 2 finished
Middleware 1 finished
So an incoming request is passed along the different middleware in the same order they are added to the app.middlewares list.
Next, aiohttp_session also documents how they add their session middleware, in the API entry for aiohttp_session.setup():
The function is shortcut for:
app.middlewares.append(session_middleware(storage))
So their middleware component is added to the end of the list. Per above that means that anything that requires access to the session must come after this middleware component.
All that the session middleware does is add the storage to the request under the aiohttp_session.STORAGE_KEY key; this makes the sessions available to any further middleware components that follow it. Your middleware components do not need to do anything special other than be added after the session middleware and leave the storage object added to the request in place. The request object is designed to share data between components this way.
Your code puts all your middleware components before the session middleware component:
middleware.setup(app)
# ...
aiohttp_session.setup(app, EncryptedCookieStorage(session_key))
This gives you an ordering of [..., refresh_token_middleware, ..., session_middleware] and your middleware can’t access any session information.
So you have to swap the order; call aiohttp_session.setup() first, and only then add your own components:
aiohttp_session.setup(app, EncryptedCookieStorage(session_key))
middleware.setup(app)
If you still have issues accessing the session storage then that means one of the intervening middleware components is removing the session storage information again.
You could use the following middleware factory at various locations to report on the session storage being present to help you debug this:
from aiohttp import web
from aiohttp_session import STORAGE_KEY

COUNTER_KEY = "__debug_session_storage_counter__"

_label = {
    False: "\x1b[31;1mMISSING\x1b[0m",
    True: "\x1b[32;1mPRESENT\x1b[0m",
}

def debug_session_storage(app):
    pre = nxt = ""
    if app.middlewares:
        previous = app.middlewares[-1]
        name = getattr(previous, "__qualname__", repr(previous))
        pre = f" {name} ->"
        nxt = f" {name} <-"

    @web.middleware
    async def middleware(request, handler):
        counter = request.get(COUNTER_KEY, -1) + 1
        request[COUNTER_KEY] = counter
        found = STORAGE_KEY in request
        indent = " " * counter
        print(f"{indent}-{pre} probe#{counter} - storage: {_label[found]}")
        try:
            return await handler(request)
        finally:
            print(f"{indent}-{nxt} probe#{counter} - done")

    app.middlewares.append(middleware)
If you insert this between every piece of middleware you add you should be able to figure out if and where the session storage is being lost:
def setup(app):
    # start with a probe
    debug_session_storage(app)
    for filename in listdir('middleware'):
        if filename[-2:] == 'py' and filename[:2] != '__':
            module = __import__('rpdashboard.middleware.' + filename[:-3], fromlist=['middleware'])
            app.middlewares.append(module.middleware)
            # Add debug probe after every component
            debug_session_storage(app)
This should tell you
what middleware component preceded each probe
if the session storage is present, using ANSI green and red colours to make it easy to spot
if any have reset the request entirely; if the probe counts start at 0 again then something cleared not only the session key but the probe counter as well!
You need to change the order yourself. The code should look like this:
def main():
    app = web.Application()
    session_key = base64.urlsafe_b64decode(fernet.Fernet.generate_key())
    aiohttp_session.setup(app, EncryptedCookieStorage(session_key))
    middleware.setup(app)
    web.run_app(app)
If you look at the code for aiohttp_session.setup
https://github.com/aio-libs/aiohttp-session/blob/master/aiohttp_session/__init__.py
def setup(app, storage):
    """Setup the library in aiohttp fashion."""
    app.middlewares.append(session_middleware(storage))
As you can see, the session middleware is added in this function. If your own middleware is appended before aiohttp_session.setup() is called, the session is still not available to it when handling a request.
I am playing around with Flask, striving to understand the details of how sessions work. I am using:
Python 3.6.1
Flask 0.12.2
Flask documentation clearly states (bold is mine):
The session object works pretty much like an ordinary dict, with the difference that it keeps track of modifications.
This is a proxy.
...
Section on proxies mentions that (again, bold is mine):
If you need to get access to the underlying object that is proxied, you can use the _get_current_object() method
Thus, the underlying object (session._get_current_object()) must remain the same for the request or, as was suggested by this answer and comment, a thread. However, it stays the same neither per request nor per thread.
Here is a demonstration code:
import threading
from flask import (
    Flask,
    session,
)

app = Flask(__name__)
app.secret_key = 'some random secret key'

@app.route('/')
def index():
    print("session ID is: {}".format(id(session)))
    print("session._get_current_object() ID is: {}".format(id(session._get_current_object())))
    print("threading.current_thread().ident is: {}".format(threading.current_thread().ident))
    print('________________________________')
    return 'Check the console! ;-)'
If you run the Flask application above and repeatedly go to /, the id returned by session._get_current_object() will occasionally change, while threading.current_thread().ident never changes.
This leads me to ask the following questions:
What exactly is returned by session._get_current_object()?
I get that it is the object underlying the session proxy, but what is this underlying object bound to? (If it is not a request and not a thread, if anything I would expect it never to change, for the simple application above.)
What exactly is returned by session._get_current_object()?
Technically speaking, it is the object referenced in the session attribute of the top-most element in the LocalStack instance named _request_ctx_stack.
This top-most element of that stack is a RequestContext that is instantiated in Flask.wsgi_app, which is called for every HTTP request. The RequestContext implements methods to push and pop itself to and from the local stack _request_ctx_stack. The push method also takes care of requesting a new session for the context.
This session is what is made available in the session proxy; the request, that the RequestContext has been initialized with, is made available via the request proxy. These two proxies are only usable inside a request context, i.e. with an active HTTP request being processed.
I get that it is the object underlying the session proxy, but what is this underlying object bound to? (If it is not a request and not a thread, if anything I would expect it never to change, for the simple application above.)
As outlined above, the request context's session, proxied by the session local proxy, belongs to the RequestContext. And it is changing with every request. As documented in Lifetime of the Context, a new context is created for each request, and it creates a new session every time push is executed.
The id of session._get_current_object() staying the same between consecutive requests is, probably, due to the new session object being created in the same memory address that the old one from the previous request occupied.
See also: How the Context Works section of the Flask documentation.
Here is a modified code snippet to illustrate the answer by shmee:
import threading

from flask import (
    Flask,
    session,
    request
)

app = Flask(__name__)
app.secret_key = 'some random secret key'

@app.route('/')
def index():
    print(">>> session <<<")
    session_id = id(session)
    session_object_id = id(session._get_current_object())
    print("ID: {}".format(session_id),
          "Same as previous: {}".format(session.get('prev_sess_id', '') == session_id))
    print("_get_current_object() ID: {}".format(session_object_id),
          "Same as previous: {}".format(session.get('prev_sess_obj_id', '') == session_object_id))
    session['prev_sess_id'] = session_id
    session['prev_sess_obj_id'] = session_object_id

    print("\n>>> request <<<")
    request_id = id(request)
    request_object_id = id(request._get_current_object())
    print("request ID is: {}".format(request_id),
          "Same as previous: {}".format(session.get('prev_request_id', '') == request_id))
    print("request._get_current_object() ID is: {}".format(request_object_id),
          "Same as previous: {}".format(session.get('prev_request_obj_id', '') == request_object_id))
    session['prev_request_id'] = request_id
    session['prev_request_obj_id'] = request_object_id

    print("\n>>> thread <<<")
    thread_id = threading.current_thread().ident
    print("threading.current_thread().ident is: {}".format(thread_id),
          "Same as previous: {}".format(session.get('prev_thread', '') == thread_id))
    session['prev_thread'] = thread_id

    print('-' * 100)
    return 'Check the console! ;-)'
The only obscurity left is, indeed, why sometimes session._get_current_object() remains unchanged between consecutive requests. And as suggested by shmee (bold is mine), it is:
probably, due to the new session object being created in the same memory address that the old one from the previous request occupied.
I am using Flask, with the flask-session plugin for server-side sessions stored in a Redis backend. I have flask set up to use persistent sessions, with a session timeout. How can I make an AJAX request to get the time remaining on the session without resetting the timeout?
The idea is for the client to check with the server before displaying a timeout warning (or logging out the user) in case the user is active in a different tab/window of the same browser.
EDIT: after some digging, I found the config directive SESSION_REFRESH_EACH_REQUEST, which it appears I should be able to use to accomplish what I want: set that to False, and then the session should only be refreshed if something actually changes in the session, so I should be able to make a request to get the timeout without the session timeout changing. It was added in 0.11, and I'm running 0.11.1, so it should be available.
Unfortunately, in practice this doesn't appear to work - at least when checking the ttl of the redis key to get the time remaining. I checked, and session.modified is False, so it's not just that I am doing something in the request that modifies the session (unless it just doesn't set that flag).
The following works, though it is rather hacky:
In the application __init__.py, or wherever you call Session(app) or init_app(app):
# set up the session
Session(app)

# Save a reference to the original save_session function so we can call it
original_save_session = app.session_interface.save_session

#----------------------------------------------------------------------
def discretionary_save_session(self, *args, **kwargs):
    """A wrapper for the save_session function of the app session interface to
    allow the option of not saving the session when calling specific functions,
    for example if the client needs to get information about the session
    (say, expiration time) without changing the session."""
    # bypass: list of functions on which we do NOT want to update the session when called
    # This could be put in the config or the like
    #
    # Improvement idea: "mark" functions on which we want to bypass saving in
    # some way, then check for that mark here, rather than having a hard-coded list.
    bypass = ['check_timeout']
    # convert function names to URLs
    bypass = [flask.url_for(x) for x in bypass]
    if flask.request.path not in bypass:
        # if the current request path isn't in our bypass list, go ahead and
        # save the session normally
        return original_save_session(self, *args, **kwargs)

# Override the save_session function to ours
app.session_interface.save_session = discretionary_save_session
Then, in the check_timeout function (which is in the bypass list, above), we can do something like the following to get the remaining time on the session:
@app.route('/auth/check_timeout')
def check_timeout():
    """"""
    session_id = flask.session.sid

    # Or however you want to get a redis instance
    redis = app.config.get('REDIS_MASTER')

    # If used
    session_prefix = app.config.get('SESSION_KEY_PREFIX')

    # combine prefix and session id to get the session key stored in redis
    redis_key = "{}{}".format(session_prefix, session_id)

    # The redis ttl is the time remaining before the session expires
    time_remain = redis.ttl(redis_key)

    return str(time_remain)
I'm sure the above can be improved upon, however the result is as desired: when calling /auth/check_timeout, the time remaining on the session is returned without modifying the session in any way.
I have a NewRequest event handler (subscriber) in Pyramid which looks like this:
@subscriber(NewRequest)
def new_request_subscriber(event):
    request = event.request
    print('Opening DB conn')
    # Open the DB
    request.db = my_connect_to_db()
    request.add_finished_callback(close_db_connection)
However, I have observed that a connection to the DB is opened even if the request goes to a static asset, which is obviously unnecessary. Is there a way, from the NewRequest handler, to check if the request is bound for a static asset? I have tried comparing the view_name to my static view's name, but apparently the view_name attribute is not available at this early stage of processing the request.
If anyone has any interesting ideas about this, please let me know!
The brute force way is to compare the request.path variable to your static view's root, a la request.path.startswith('/static/').
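A minimal sketch of that brute-force check wired into the subscriber from the question (the '/static/' prefix is an assumption and has to match whatever your static view actually serves under; my_connect_to_db and close_db_connection are the question's own helpers):
from pyramid.events import NewRequest, subscriber

@subscriber(NewRequest)
def new_request_subscriber(event):
    request = event.request
    if request.path.startswith('/static/'):
        return  # skip opening a DB connection for static assets
    request.db = my_connect_to_db()
    request.add_finished_callback(close_db_connection)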
The method I like the best and use in my own apps is to add a property to the request object called db that is lazily evaluated upon access. So while you may add it to the request, it doesn't do anything until it is accessed.
import types

def get_db_connection(request):
    if not hasattr(request, '_db'):
        request._db = my_connect_to_db()
        request.add_finished_callback(close_db_connection)
    return request._db

def new_request_subscriber(event):
    request = event.request
    request.db = types.MethodType(get_db_connection, request)
Later in your code you can access request.db() to get the connection. Unfortunately it's not possible to add a property to an object at runtime (afaik), so you can't set it up so that request.db gives you what you want. You can get this behavior without using a subscriber by following the cookbook entry where you subclass Request and add your own lazy property via Pyramid's @reify decorator.
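For reference, a hedged sketch of that cookbook-style approach (it reuses the question's my_connect_to_db/close_db_connection stand-ins and has to be registered as the request factory during configuration):
from pyramid.decorator import reify
from pyramid.request import Request

class MyRequest(Request):

    @reify
    def db(self):
        # evaluated lazily on first access, then cached on the request
        conn = my_connect_to_db()
        self.add_finished_callback(close_db_connection)
        return conn

# at configuration time:
# config.set_request_factory(MyRequest)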
def _connection(request):
    print("******Create connection***")
    # conn = request.registry.dbsession()
    conn = MySQLdb.connect("localhost", "DB_Login_Name", "DB_Password", "data_base_name")

    def cleanup(_):
        conn.close()

    request.add_finished_callback(cleanup)
    return conn

@subscriber(NewRequest)
def new_request_subscriber(event):
    print("new_request_subscriber")
    request = event.request
    request.set_property(_connection, "db", reify=True)
Try this one; I referenced the following web page:
http://pyramid.readthedocs.org/en/1.3-branch/api/request.html
See the "set_property" section. It works for me.