Modifying quart.request - acceptable? (Python API)

I need to store some per-request metrics and telemetry (timestamps and the like) in Quart (the Python web framework). Is it acceptable to modify quart.request and add attributes to it?
It appears to work, and it's similar to how I would have done it in Flask, but I'm not sure whether it's considered bad practice in Quart.
The background is that I want to store fine-grained telemetry (namely timestamps for when certain things happen inside a request), not just the total request time.
Regards,
Niklas

Yes, it's best to extend the Request class and then assign the new class to the request_class attribute on the app object.
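A minimal sketch of that approach (TelemetryRequest and the timestamps attribute are illustrative names, not part of Quart's API):

import time
from quart import Quart, Request, request

class TelemetryRequest(Request):
    """Request subclass carrying per-request telemetry."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.timestamps = {"created": time.monotonic()}

app = Quart(__name__)
app.request_class = TelemetryRequest

@app.route("/")
async def index():
    # Record a fine-grained event on the current request
    request.timestamps["handler_entered"] = time.monotonic()
    return "ok"

Because every request gets its own instance, there is no shared state to worry about.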

Related

Transfer data from a class to a webpage in Flask

I packed some functions into a class that translates text into another language using the Bing or Google translation API, and I use Flask to render the templates, as you may be aware.
While the translation runs, I have status and progress information, like "10% completed", generated as the paragraphs are translated. As you can imagine, that information is produced inside the translation class, since the class does the translation job.
In my Flask app, after calling the class to do the translation, I want the webpage to make an AJAX call to the Flask app to retrieve that progress information from the class.
Here is what I did:
If I don't use a class and put all the functions in the main Flask app file, I can use a global variable to store the progress, but that makes the code complex, and I want to pack all the associated functions into a class.
In the Flask app, I tried using session['translation_pos']: I stored the progress there inside the class and read it back in the app, but it doesn't seem to work.
I use Python 3 and Flask, and I don't know how to get this progress percentage from the class, where the data is generated, to the app.
Maybe one way would be to store the number in a text file or somewhere similar and read the file in the app, but that surely shouldn't be the way to handle this problem.
If anyone could advise with some ideas, it would be much appreciated.
Thank you all.
You may want to look at a different approach to running the task, using something like Celery or Redis Queue; this is covered very well in the mega-tutorial.
By using one of these, you can run the task in the background and query the runner periodically for progress to report back to the user, as in the sketch below.
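A minimal sketch of the Celery variant (translate_text, the broker URLs, and the 'PROGRESS' state convention are illustrative, not from the question):

from celery import Celery

celery = Celery(__name__, broker="redis://localhost:6379/0",
                backend="redis://localhost:6379/0")

@celery.task(bind=True)
def translate_text(self, paragraphs):
    translated = []
    for i, paragraph in enumerate(paragraphs):
        translated.append(paragraph)  # the real translation API call goes here
        # Publish progress so the Flask AJAX endpoint can poll it
        self.update_state(state="PROGRESS",
                          meta={"percent": 100 * (i + 1) // len(paragraphs)})
    return translated

The Flask view behind the AJAX call can then create translate_text.AsyncResult(task_id) and read its .state and .info attributes to return the percentage to the page.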
If it were me, for the data processing I would store this in a database. When the task is completed, it gets re-queried and passed through to the UI as a template variable (or streamed from an AJAX function if it's a large data set).

Preserving the value of variables between subsequent requests in Python Django

I have a Django application that logs the character sequences from an autocomplete interface. Each time a call is made to the server, the parameters are added to a list, and when the user submits the query, the list is written to a file.
Since I am not sure how to preserve the list between subsequent calls, I relied on a global variable, say query_logger. Now I can preserve the list in the following way:
query_logger = None

def log_query(query, completions, submitted=False):
    global query_logger
    if query_logger is None:
        query_logger = list()
    query_logger.append((query, completions, submitted))  # append takes a single item
    if submitted:
        # ... write query_logger to the file here ...
        query_logger = None
While this hack works for a single client sending requests, I don't think it is a stable solution when requests come from multiple clients. My question is two-fold:
What is the order of execution of requests: do they follow first come, first served (especially if the requests are asynchronous)?
What is a better approach for doing this?
If your Django server is single-threaded, then yes, it will respond to requests as it receives them. If you're using WSGI or another proxy, that becomes more complicated. Regardless, I think you'll want to use a database to store the information.
I encountered a similar problem and ended up using SQLite to store the data temporarily, because it's super simple and easy to manage. You'll want to use IP addresses, or create a unique ID passed as a URL parameter, to identify clients on subsequent requests (a sketch follows below).
I also scheduled a daily task (using cron on Ubuntu) that goes through and removes any requests that were started but never completed (excluding those started in the last hour).
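A minimal sketch of the unique-ID idea (the client_id parameter and the view itself are illustrative):

import uuid
from django.http import JsonResponse

def log_query(request):
    # Reuse the client's ID if it sent one, otherwise mint a fresh one
    client_id = request.GET.get("client_id") or str(uuid.uuid4())
    # ... insert (client_id, query, completions, submitted) into SQLite here ...
    return JsonResponse({"client_id": client_id})  # the client echoes this back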
You must not use global variables for this.
The proper answer is to use the session - that is exactly what it is for.
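A minimal sketch of the session-based approach (the query_log key is an illustrative name):

def log_query(request, query, completions, submitted=False):
    log = request.session.get("query_log", [])
    log.append([query, completions, submitted])
    request.session["query_log"] = log  # reassign so the change is saved
    if submitted:
        # ... write log to the file here ...
        del request.session["query_log"]

The session is keyed per client, so concurrent users no longer interfere with each other.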
The simplest (bad) solution would be to have a global variable. Either way, you need some in-memory location or a database to store this info.

Django bootstrap/middleware/enter-exit

I have the following problem. I want to add to Django some kind of setup/teardown for each request. For example, at the beginning of each user request I want to start data collection, and at the end of the request dump all the data to the database (1).
What comes to mind right now: at the start of the middleware, instantiate an object (like a singleton) that every other part of the code can import and use; before returning the response, the same middleware scraps the object. My only concern is thread-safety, so maybe create a global dict and register keys built from a hash of url + session_id, or maybe from the request object's id (Python's internal object id; is that a good way to go?). At the end of the request the key would be removed from the dict.
Any recommendations, thoughts, ideas?
(1) Please do not ask me why I cannot access the DB directly or anything like this. This is only an example. I'm looking for a general idea for something like enter and exit, but request-response-wise, that can be imported anywhere in the code and safely used.
In your middleware, you can create a new object for the data you want to maintain and put it in the request.META dict. It will be available wherever the request is available. In this case, I don't think you need to worry about thread-safety, as each request creates a new object.
If you just want to create the data object once when request processing starts and destroy it after the request is processed, with no other code referencing the data, then look at the request_started and request_finished signals.
Middleware is most certainly not thread-safe. You should not store anything per-request either on the middleware object or in the global namespace.
The usual way to do this sort of thing is to annotate it onto the request object. Middleware and views have access to it, but to get it anywhere else (e.g. in the model) you'll need to pass it around.
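A minimal sketch of that annotation in (new-style) middleware; TelemetryMiddleware and the request.telemetry attribute are illustrative names:

import time

class TelemetryMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        # A fresh object per request, so there is no shared state between threads
        request.telemetry = {"started": time.monotonic(), "events": []}
        response = self.get_response(request)
        request.telemetry["finished"] = time.monotonic()
        # ... dump request.telemetry to the database here ...
        return response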

Django vars in RAM

I am implementing a really lightweight web project which has just one page, showing data in a diagram. I use Django as the web server and d3.js as the plotting routine for this diagram. As you can imagine, there are just a few simple time series that the Django server has to respond with, so I was wondering if I could simply hold this variable in RAM. My first test was positive; I had something like this in my views.py:
import json
import numpy as np
from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt

X = np.array([123, 23, 1, 32, 123, 1])

@csrf_exempt
def getGraph(request):
    # numpy arrays aren't JSON-serializable, hence tolist()
    return HttpResponse(json.dumps(X.tolist()))
Notice that X is updated by another function every now and then, but all user access is read-only. Do I have to deal with:
security issues from defining a global variable?
inconsistencies in general?
I found a thread discussing global variables in Django, but in that case the difficulty is in handling multiple write accesses.
To answer the potential question of why I don't want to store the data in a database: all the data in X is already stored in a huge remote database, and this web app just needs to display it.
Storing it in a variable does indeed have threading implications (and also scalability implications: what if you have two Django servers running the same app?). The advice from the Django community is: don't!
This sounds like a good fit for the Django cache system, though. Just cache your getGraph view with @cache_page and the job is done. No need to use memcached; the built-in in-memory memory-cache cache backend* will work fine. Put a very high number as the timeout on the cache (years).
This way you are storing the HTTP response (JSON), not the value of X. But from your code sample that is not a problem: if you need to re-calculate X you need to re-calculate the JSON, and if you need to re-calculate the JSON you will need to re-calculate X.
https://docs.djangoproject.com/en/dev/topics/cache/?from=olddocs/
* Or just 'built-in memory backend', I couldn't resist.
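A minimal sketch of the cached view, reusing the imports and X from the snippet above (the one-year timeout is illustrative):

from django.views.decorators.cache import cache_page

@cache_page(60 * 60 * 24 * 365)  # cache the JSON response for roughly a year
def getGraph(request):
    return HttpResponse(json.dumps(X.tolist()))

Whatever function updates X would then also need to invalidate or overwrite this cache entry so the new data is served.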

Dynamically select database based on request

I'm trying to keep my RESTful site DRY, and I can't come up with a good way to factor out the code that dynamically selects each "user's" separate database. We've got a separate database for each client. The client comes in as part of the URL and is passed into each view as a keyword arg. I want to give each and every view the behavior of accessing the corresponding database WITHOUT having to make sure each programmer writing a view remembers to use
Thing.objects.using(user).all()
and
t = Thing()
t.save(using=user)
every time. It seems like there ought to be some way to intercept the request and set the default database based on the args to the view before it hits the view, allowing us to use the usual
Thing.objects.all()
This would also have the advantage of factoring out all the user resolution code into a more appropriate place.
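One hedged sketch of such an interception point, separate from the answer below: Django's database-router hook, fed by a thread-local that middleware sets from the URL kwarg (all names here are illustrative):

import threading

_local = threading.local()

class PerClientRouter:
    """Route reads and writes to whatever database the middleware selected."""
    def db_for_read(self, model, **hints):
        return getattr(_local, "db_name", None)  # None falls back to "default"

    def db_for_write(self, model, **hints):
        return getattr(_local, "db_name", None)

def set_current_db(name):
    # Called from middleware after parsing the client out of the URL
    _local.db_name = name

The router is registered via the DATABASE_ROUTERS setting, after which plain Thing.objects.all() queries go to the selected database.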
We do this with the following technique.
Apache picks off the first part of the path and routes it to a specific mod_wsgi daemon.
Each mod_wsgi daemon is a different customer's installation.
We have many parallel customers, each with (nearly) identical code, all based off a single common installation of the base software.
Each customer has a separate settings.py with their unique configuration.
They don't (actually can't) know about each other because Apache has peeled off the top layer of the path for us.
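A minimal sketch of one customer's settings.py under this scheme (the module and database names are illustrative):

from base_settings import *  # the single common installation

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "customer_acme",
        # per-customer credentials and host go here
    }
}

Because each mod_wsgi daemon loads its own settings module, ordinary Thing.objects.all() calls already hit the right database.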
