When are Flask Resources created?

I'm new to Flask. I have a resource class that handles POST requests. The request processing is quite elaborate, and can't be all in the post function. Do I get a new Resource instance for each request? Or are instances reused? Is it safe for me to do something like:
class MyResource(Resource):
    def post(self):
        self.var = 17
        self.do_some_work()
        return self.var * self.var
Does Flask guarantee that my resource instance will not be used for other transactions?

Resource objects are created at the time the request is served, and they are not persistent. Keep in mind that REST principles say that APIs must be stateless; if you want to store data between requests, you should use some kind of database.
The simplest way to prove this is to put a print(id(self)) in your handler and trigger the request a few times. You will see that the object changes every time.
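For example, a minimal sketch of that experiment (the resource name and route are illustrative, not taken from the question):

from flask import Flask
from flask_restful import Api, Resource

app = Flask(__name__)
api = Api(app)

class EchoResource(Resource):  # hypothetical resource for the demo
    def get(self):
        # A fresh instance is created for every request, so this id
        # differs on every call.
        print(id(self))
        return {"instance_id": id(self)}

api.add_resource(EchoResource, "/echo")

Hitting /echo a few times prints a different id each time, confirming that instances are not reused.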
Now, if you are interested in Flask's internals, here we go.
The Resource class is part of Flask-RESTful, and the documentation states the following:
Resources are built on top of Flask pluggable views, giving you easy access to multiple HTTP methods just by defining methods on your resource.
Resources are added with the Api.add_resource() method, which simply registers the underlying View object:
if self.app is not None:
    self._register_view(self.app, resource, *urls, **kwargs)
else:
    self.resources.append((resource, urls, kwargs))
The Api._register_view() method does quite a lot of work, but the most meaningful part is these two lines:
resource_func = self.output(resource.as_view(endpoint, *resource_class_args, **resource_class_kwargs))
...
self.blueprint_setup.add_url_rule(url, view_func=resource_func, **kwargs)
Here you can see that the view object provides a handler that is associated with the URL path. This handler will be called every time an HTTP request is made to that route.
Finally, we arrive at the core: the View.as_view() method creates a function on the fly, and this function becomes the route handler.
def view(*args, **kwargs):
    self = view.view_class(*class_args, **class_kwargs)
    return self.dispatch_request(*args, **kwargs)
As you can see, this function creates a new object every time a request is dispatched, and, as you already guessed, view_class holds your custom request-handling class.

Related

Django equivalent to Flask g? Storing data on flask.g vs flask.request?

As far as I understand, flask.g offers temporary storage for the current request context (even though it's technically tied to the application context, as described here). Accessing g.my_data inside a request handler ensures that my_data belongs to the current request. Does Django have something equivalent to this?
In my experimentation, Django's request object, which is passed into view functions, can be used in the same way as flask.g: I can simply use request.my_data and be sure that my_data is for the current request.
Noticing this, I tried using flask.request similarly to how I used flask.g, with equivalent results. This raises the question: what does flask.g provide over flask.request, just peace of mind that flask.request attributes will not be overwritten?
For context on the use case: I'm sharing data between the actual request handlers (Flask) or view functions (Django) and the @before_request (Flask) or middleware (Django) handlers.
This source seems to recommend putting data on the request.
As does this source.
This leads me to answer yes to number 1 below, but to wonder even more about number 2.
TL;DR:
1. Can the Django request object be used equivalently to flask.g?
2. Can flask.request be used equivalently to flask.g, or what is the benefit of flask.g over flask.request?
Django's class-based views have context_data and the get_context_data() method. To add to this data, override get_context_data() in your view like this:
def get_context_data(self, **kwargs):
    context = super(NameOfView, self).get_context_data(**kwargs)
    context.update(...)
    return context
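On the Flask side of the question, here is a minimal sketch of sharing data between a @before_request handler and a view via flask.g (the route, handler, and attribute names are illustrative):

from flask import Flask, g, jsonify

app = Flask(__name__)

@app.before_request
def attach_my_data():
    # g is bound to the current application/request context, so this
    # value is visible only to the request being handled right now.
    g.my_data = {"started": True}

@app.route("/")
def index():
    # The same g object is available inside the view.
    return jsonify(g.my_data)

Compared with setting ad-hoc attributes on flask.request, g is the documented namespace for this kind of per-request state, which is essentially the "peace of mind" the question mentions.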

Google App Engine - Securing the URL of a cron job

I'm a newbie to Google App Engine. I want to restrict the cron URL so that it isn't accessible directly. For this I've already read the docs and some Q&As (e.g. Google app engine: security of cron jobs).
I implemented the login: admin solution suggested in that link, but I failed to implement the security check because self.request.headers.get('X-AppEngine-Cron') is always None, whether the request comes from cron or from a direct URL access.
So I can't tell where the request is coming from (cron or direct access).
def cron_method(BaseRestHandler):
    def check_if_cron(self, *args, **kwargs):
        if self.request.headers.get('X-AppEngine-Cron') is None:
            logging.info("error-- not cron")
            self.UNAUTH = "cron"
            self.raise_exception()
        else:
            return BaseRestHandler(self, *args, **kwargs)
    return check_if_cron
I used a customized handler, BaseRestHandler, for other authentication.
@cron_method
def put(self):
    logging.info("inside put--")
This is called via the task queue from the get method of the class.
The problem is that I don't receive the X-AppEngine-Cron header.
Any other logic or method will be appreciated.
Thanks In Advance.
It seems you attempted to make the check a decorator.
But your code shows the decorator applied to a put() method, not a get() method - the cron service only issues GET requests.
Also, your decorator doesn't look quite right to me. Shouldn't a decorator take a function as its argument and return a locally defined function that executes (rather than returns) the function it received?
I'd suggest you go back to basics: try making the header check in the get method of the handler itself, and only after you get that working consider further, more complex changes like pulling the check into a decorator.
It is more likely that your decorator is not working than that GAE's documented infrastructure is broken. Keeping things simple (at first) would at least point your investigation effort in a better direction.
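For illustration, a minimal sketch of that direct in-handler check (this assumes a webapp2-style handler, which may differ from the custom BaseRestHandler in the question; the class name is hypothetical):

import logging
import webapp2

class MyCronHandler(webapp2.RequestHandler):  # hypothetical handler
    def get(self):
        # App Engine sets this header only on requests issued by the
        # cron service; it is stripped from external requests.
        if self.request.headers.get('X-AppEngine-Cron') is None:
            logging.info("error -- not cron")
            self.abort(403)
        # ... do the actual cron work here ...

Once that basic check works, the decorator version below is the natural next step.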
Try this:
def cron_method(handler_method):
    def check_if_cron(self, *args, **kwargs):
        if self.request.headers.get('X-AppEngine-Cron') is None:
            logging.info("error-- not cron")
            self.UNAUTH = "cron"
            self.raise_exception()
        else:
            handler_method(self, *args, **kwargs)
    return check_if_cron
As for the invocations from the task queue - those requests are no longer cron requests, even if the tasks are created and enqueued by a cron request.
From Securing task handler URLs:
If a task performs sensitive operations (such as modifying data), you might want to secure its worker URL to prevent a malicious external user from calling it directly. You can prevent users from accessing task URLs by restricting access to App Engine administrators. Task requests themselves are issued by App Engine and can always target a restricted URL.
You can restrict a URL by adding the login: admin element to the handler configuration in your app.yaml file.
If you want to also prevent manual access to those URLs (i.e. restrict it only to task queue requests) you can perform header checks similar to the cron one. The header values are listed in Reading request headers. Personally I picked X-AppEngine-TaskName.
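For instance, the same decorator pattern with the task queue header swapped in (a sketch mirroring the cron decorator above, so self.raise_exception() is still your custom BaseRestHandler method):

def task_queue_method(handler_method):
    def check_if_task(self, *args, **kwargs):
        # X-AppEngine-TaskName is set by App Engine on task queue
        # requests and cannot be set by external callers.
        if self.request.headers.get('X-AppEngine-TaskName') is None:
            logging.info("error -- not a task queue request")
            self.UNAUTH = "task"
            self.raise_exception()
        else:
            handler_method(self, *args, **kwargs)
    return check_if_task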

Django: app level variables

I've created a Django-rest-framework app. It exposes some API which does some get/set operations in the MySQL DB.
I have a requirement to make an HTTP request to another server and piggyback that response along with the usual response. I'm trying to use a self-made HTTP connection pool to make these HTTP requests instead of opening a new connection on each request.
What is the most appropriate place to keep this app level HTTP connection pool object?
I've looked around and there are multiple solutions, each with some cons. Here are a few:
To make the pool a singleton class in a different file; but this is not a very Pythonic way to do things, and there are various discussions about why not to use the singleton design pattern.
Also, I don't know how smart it would be to pool a pooler? (:P)
To keep it in the __init__.py of the app directory. The issues with that are as follows:
It should only contain imports & things related to that.
It will be difficult to unit test the code because the import would happen before mocking and it would actually try to hit the API.
To use sessions; but I guess that makes more sense for something user-session specific, like a per-user number, etc.
Also, the object needs to be serializable, and I don't know how an HTTP connection pool could be serialized.
To keep it as a global in views.py, but that is also discouraged.
What is the best place to store such app/global level variables?
This thread is a bit old but can still be found via Google. Generally, if you want a component to be accessible across several apps in your Django project, you can put it in a general or core app as a util.
In terms of reusability and keeping things app-specific, you can use a factory with a cache mechanism, something like:
from dataclasses import dataclass, field

class ConnectionPool:
    pass

@dataclass
class ConnectionPoolFactory:
    connection_pool_cache: dict[str, ConnectionPool] = field(default_factory=dict)

    def get_connection(self, app_name: str) -> ConnectionPool:
        if self.connection_pool_cache.get(app_name) is None:
            self.connection_pool_cache[app_name] = ConnectionPool()
        return self.connection_pool_cache[app_name]
A possible solution is to implement a custom Django middleware, as described in https://docs.djangoproject.com/ja/1.9/topics/http/middleware/.
You could initialize the HTTP connection pool in the middleware's __init__ method, which is called only once, at the first request. Then start the HTTP request during process_request, and in process_response check that it has finished (or wait for it) and append that response to the internal one.
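A rough sketch of that middleware idea, using the old-style middleware API described in the linked docs (urllib3 is an assumed pool implementation; class and attribute names are illustrative):

import urllib3  # assumed connection pool implementation

class ConnectionPoolMiddleware(object):  # hypothetical middleware
    def __init__(self):
        # Old-style middleware is instantiated once, so the pool is
        # created a single time and shared across requests.
        self.pool = urllib3.PoolManager(num_pools=4)

    def process_request(self, request):
        # Expose the shared pool to the view via the request object.
        request.http_pool = self.pool
        return None

    def process_response(self, request, response):
        # The result of the external call could be merged into the
        # response here before it is returned to the client.
        return response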

How do I run an action for all requests in Flask?

I have some code I want to run for every request that comes into Flask-- specifically adding some analytics information. I know I could do this with a decorator, but I'd rather not waste the extra lines of code for each of my views. Is there a way to just write this code in a catch all that will be applied before or after each view?
Flask has dedicated hooks called before and after requests. Surprisingly, they are called:
Flask.before_request()
Flask.after_request()
Both are decorators:
@app.before_request
def do_something_whenever_a_request_comes_in():
    # request is available here
    ...

@app.after_request
def do_something_whenever_a_request_has_been_handled(response):
    # we have a response to manipulate, always return one
    return response
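For the analytics use case in the question, a small sketch that times every request with these hooks (the logging call stands in for whatever analytics backend you actually use):

import time
from flask import Flask, g, request

app = Flask(__name__)

@app.before_request
def start_timer():
    # Stash the start time on g so it is still available in after_request.
    g.start_time = time.time()

@app.after_request
def record_analytics(response):
    duration = time.time() - g.start_time
    app.logger.info("%s %s took %.3fs", request.method, request.path, duration)
    return response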

Proper way to establish persistent complex object in Flask

I am looking for the "good practice" advice on how to handle a persistent object in Flask.
I have my own classes that handle user, groups, user membership in groups and user/group permissions. Among those, there is a Passport class that holds information about the current user and their permissions.
The idea is that each user session should be associated with its own Passport object that persists over the views: so that certain permissions could be initialized upon user login, and can be checked later while using the views and performing AJAX requests.
Currently I have serialize and deserialize methods in the Passport class, and a FlaskPassport class that is initialized in the views.py global scope. FlaskPassport has a read-only passport property that reads the serialized passport data from a session variable and returns the deserialized object, plus a save() method that does the opposite. It also has a decorator method for views that checks permissions. The code that gives access to the passport data stored in serialized form in the session looks pretty clumsy, and the fact that the passport object has to be saved manually after every alteration doesn't seem right: ideally Flask would automatically save the altered passport to the session after the request is processed.
So, I am looking for some clever pattern that would give access to a global passport object, accessible to all views, and also let me add decorators to the views that need permission checking.
There are multiple ways to do this, including:
Storing the passport instance on g and using a before_request and after_request handler pair to hydrate / serialize the instance from / to the session:
@app.before_request
def load_passport():
    if "passport_id" in session:
        g.passport = create_passport_from_id(session["passport_id"])

@app.after_request
def serialize_passport(response):
    if hasattr(g, "passport"):
        session["passport_id"] = g.passport.id
    return response
Use the thread-local pattern that Flask uses for request and g (among others). Under the hood this uses Werkzeug's LocalProxy, which is mounted on either the application context or the request context (depending on the lifetime of the underlying object):
from flask import (_request_ctx_stack as request_ctx,
                   has_request_context, session)
from werkzeug.local import LocalProxy

def get_passport():
    if has_request_context() and "passport_id" in session:
        if not hasattr(request_ctx.top, "passport"):
            passport_id = session["passport_id"]
            request_ctx.top.passport = construct_passport_from_id(passport_id)
        return getattr(request_ctx.top, "passport", None)
    return EmptyPassport()

current_passport = LocalProxy(get_passport)

@app.after_request
def serialize(response):
    if current_passport.is_not_empty():
        session["passport_id"] = current_passport.id
    return response
It is worth noting that I have chosen not to serialize the entire passport to the session, since that information is passed back and forth with every request (depending on how much information you are storing in your passport, this may or may not be something that concerns you).
It is also worth noting that neither of these approaches is inherently secure. Flask does sign the session cookie to make it tamper proof, but you'll still need to worry about logout, freshness, etc. Take a look at Flask-Login's code for some of the other things you'll want to think about.
