I have a Flask app that makes some external API calls and combines the resulting data. An external API call can take up to ~40 seconds.
Currently I cache the result using flask_caching with an expiry time of 1 hour.
@app.route('/api/pos')
@cache.cached(timeout=3600)
def get_pos():
    return jsonify( [LONG TIME API CALL] )
How do I make Flask re-run the external API call automatically when the cache expires, so as to refresh it, instead of the user having to wait 40 seconds once the cache has expired?
I was thinking about a cron job which calls my Flask app every hour with, for example, cURL. But there must be a prettier method.
So to summarize: is there some kind of event that can trigger when Flask's cache times out?
Best regards, and sorry for the formulation of the question, as English is not my native language.
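One way to avoid the cold-cache wait (a sketch, not from the question itself) is to refresh the value from a background timer inside the app, so the cache is overwritten before anyone misses it. The fetch_positions() helper and the module-level CACHE dict below are hypothetical stand-ins for the real slow API call and for flask_caching's cache storage:

```python
import threading

CACHE = {}
REFRESH_INTERVAL = 3600  # seconds; matches the 1-hour timeout in the question

def fetch_positions():
    # Hypothetical stand-in for the ~40 s external API call.
    return {"positions": [1, 2, 3]}

def refresh_cache():
    # Re-run the slow call and overwrite the cached value *before* it
    # expires, so no request ever sees a cold cache.
    CACHE["pos"] = fetch_positions()
    timer = threading.Timer(REFRESH_INTERVAL, refresh_cache)
    timer.daemon = True  # don't block interpreter shutdown
    timer.start()

refresh_cache()  # warm the cache at startup, then keep it warm
```

With flask_caching you would call cache.set('pos', value, timeout=...) inside refresh_cache() instead of writing to a dict, and in a multi-worker deployment a scheduler such as APScheduler or Celery beat is more robust than threading.Timer.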
In our project we read historical data (the past 45 days) from a Solr DB. We can read at most 5 days of data in a single API call, so we call the API sequentially in a for loop to cover the 45 days. However, we are observing 400 status codes for some of the calls, seemingly at random: of the 9 API calls in total, some return 200 and some return 400, and if I rerun the job, a call that returned 400 earlier might return 200 this time.
I checked with the API owner; they said it is because we are calling the next API before the earlier call has completed.
1.) How can I identify in Python that an API request has completed, so that I call the next API only after the previous request is done? Can this only be answered by the API owner, or is there a way to do it with the Python requests library?
2.) Should I add a sleep statement after each API call? If so, how do I know the sleep time, and is this an efficient approach?
Thanks
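On question 1: calls made with the Python requests library are synchronous, so requests.get() returns only after the server has sent its response; from the client's side the call is complete at that point. If the server still reports overlapping calls, it is likely finishing work asynchronously after responding, in which case retrying with a backoff is usually more robust than a fixed sleep. A minimal sketch (the fetch callable, e.g. a wrapper around requests.get(...).raise_for_status(), is hypothetical):

```python
import time

def call_with_retry(fetch, max_retries=3, backoff=2.0):
    """Call fetch(); on failure, wait and retry with exponential backoff.

    fetch is any zero-argument callable that raises on a non-200
    response (for example a wrapper that calls raise_for_status()).
    """
    delay = backoff
    for attempt in range(max_retries):
        try:
            return fetch()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries, propagate the error
            time.sleep(delay)
            delay *= 2  # back off: 2 s, 4 s, 8 s, ...

# Usage sketch: fetch one 5-day window at a time, strictly sequentially.
# results = [call_with_retry(lambda w=w: get_window(w)) for w in windows]
```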
I have a REST API (Python based). The underlying logic calls an Oracle procedure that refreshes certain materialized views. The application is hosted on OpenShift Container Platform. Sometimes the app gets stuck on this step (refreshing the materialized views).
Is there a way to add a liveness probe here that restarts the container if the app remains stuck at this step for some amount of time, say 2 hours?
Yes, that would be possible, but you would need to implement the logic yourself.
Liveness probes typically check the return code of a command or the HTTP response from a REST endpoint. So in your case you would likely need to create a new REST endpoint that checks whether any step has been running longer than a certain time and, if so, returns an HTTP error code such as 500.
I have one DB query that takes a couple of seconds in production. I also have a DRF ViewSet action that returns the results of this query.
I'm already caching this action using cache_page.
@method_decorator(cache_page(settings.DEFAULT_CACHE_TIMEOUT))
@action(detail=False)
def home(self, request) -> Response:
    articles = Article.objects.home()
    return Response(serializers.ArticleListSerializer(articles, many=True).data,
                    headers={'Access-Control-Allow-Origin': '*'})
The problem is that after 15 minutes, at least one user has to wait 15 seconds for the response. I want to pre-warm this cache every 5 minutes in the background so that no user has to wait.
I use the default caching mechanism.
My idea is to create a management command that will be executed using crontab. Every 5 minutes it will call Article.objects.home() or the ViewSet action and update its value in the cache.
As this is only one entry, I don't hesitate to use database caching.
How would you do that?
EDIT: as the default LocMemCache is per-process and not shared between workers, I'll go with database caching. I just don't know how to manually cache the view or the QuerySet.
A cron or Celery beat task (if you already use celery) looks like the best option.
Calling Article.objects.home() from the command would not do much by itself, unless you cache inside the manager's home() method (which could be a valid option and would simplify automated cache refresh).
To refresh the view cache, you are better off sending actual requests to the URL from the management command. You will also want to invalidate the cache before sending the request, so that it gets regenerated.
Also, keep in mind the cache timeout when planning the job frequency. You wouldn't want to refresh too early nor too late.
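If you go the cache-in-the-manager route mentioned above, the pattern can be sketched without any Django specifics as a small TTL cache whose refresh() the cron or Celery-beat job calls explicitly. Everything here is a hypothetical illustration; in the real project you would wrap Article.objects.home() and store the value with Django's cache backend rather than in a closure:

```python
import time

def cached_with_ttl(ttl_seconds):
    """Cache a zero-argument function's result for ttl_seconds.

    A periodic job (cron / Celery beat) can call func.refresh() a bit
    before the TTL elapses, so ordinary callers never hit the slow path.
    """
    def wrap(func):
        state = {"value": None, "at": None}

        def refresh():
            # Recompute and overwrite in place: there is no window where
            # the entry is missing, unlike delete-then-recompute.
            state["value"] = func()
            state["at"] = time.time()

        def inner():
            if state["at"] is None or time.time() - state["at"] >= ttl_seconds:
                refresh()
            return state["value"]

        inner.refresh = refresh
        return inner
    return wrap

# Usage sketch (Django names hypothetical):
# @cached_with_ttl(15 * 60)
# def home():
#     return list(Article.objects.home())   # force evaluation before caching
# ...and the management command simply calls home.refresh() every 5 minutes.
```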
We have an Azure HTTP-triggered function app (f1) which talks to another HTTP-triggered function app (f2) that runs a prediction algorithm.
Depending on the size of the input request from f1, the response time of f2 increases a lot.
When f2's response time is high, the functions time out at 320 seconds.
Our requirements are:
- Provide the prediction algorithm as a service (f2).
- An orchestration API (f1) is called by the client; based on the client's input request, f1 collects the data from the database, does data validation, and passes the data to f2 for prediction.
- After prediction, f2 responds with the predicted result to f1.
- Once f1 receives the response from f2, f1 responds back to the client.
We are searching for an alternative Azure approach or solution that will reduce the latency of the API, under the condition that f2 remains a service.
If it takes more than 5 minutes in total to validate user input, retrieve additional data, feed it to the model, and run the model itself, you might want to look at something other than APIs that return a response synchronously.
With these kinds of running times, I would recommend an asynchronous pattern, such as: F1 stores all data on an Azure Queue; F2 (queue triggered) runs the model and stores the result in a database; the requestor monitors the database for updates. If F1 takes the most time, then create an F0 that stores the request on a queue and make F1 queue-triggered as well.
As described in Limits for Http Trigger:
If a function that uses the HTTP trigger doesn't complete within 230 seconds, the Azure Load Balancer will time out and return an HTTP 502 error. The function will continue running but will be unable to return an HTTP response.
So it's not possible to make f1 and/or f2 Http Triggered.
Alternatives are many, but none can be synchronous (due to limitation above) if:
Interface to end user is REST API and
API is served by Http Triggered Azure Function and
Time needed to serve request is greater than 230 seconds.
Assuming:
Interface to end user is REST API and
API is served by Http Triggered Azure Function
one async possibility would be like this:
PS: I retained f1 and f2, which do the same as in your design, though their trigger/output bindings change.
An HTTP-triggered REST API served by f3 is the entry point for the end user to trigger the job. It posts to queue q1 and returns a job-id / status-url as the response.
The user can query/poll the current status/result of the job by job-id using another HTTP-triggered API served by f4.
f1 and f2 are now queue-triggered.
f1, f2 and f3 update the status for each job-id whenever needed in ADLS (which could be anything else, like Redis cache or Table Storage, etc.).
f4 need not be a separate function; it can be served as a different path/method of f3.
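Stripped of the Azure SDK (so the sketch stays self-contained), the f3 / f1-f2 / f4 flow reduces to a submit/worker/poll pattern. Here queue.Queue stands in for q1, a dict stands in for the ADLS/Redis/Table Storage status store, and the "prediction" is a trivial placeholder:

```python
import queue
import uuid

jobs = {}                   # job-id -> status/result (stand-in for ADLS/Redis/Table Storage)
work_queue = queue.Queue()  # stand-in for Azure Queue q1

def submit_job(payload):
    """f3: HTTP entry point. Enqueue the work and return a job id immediately."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "queued", "result": None}
    work_queue.put((job_id, payload))
    return job_id  # the client polls with this id (or a status-url built from it)

def worker_step():
    """f1/f2: queue-triggered stage. Validate, 'predict', store the result."""
    job_id, payload = work_queue.get()
    jobs[job_id]["status"] = "running"
    result = sum(payload)  # trivial placeholder for the prediction algorithm
    jobs[job_id] = {"status": "done", "result": result}

def job_status(job_id):
    """f4: status endpoint the client polls by job-id."""
    return jobs.get(job_id, {"status": "unknown"})
```

In the real deployment each function reads/writes the shared store instead of an in-process dict, since the functions run in separate processes.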
Two questions regarding session timeouts in cherrypy:
1) Is there a way to determine the remaining time in a session? This is related to trying to use http://plugins.jquery.com/epilgrim.sessionTimeoutHandler/
2) Is there a way to make a call to cherrypy NOT reset the timeout, such that the plugin above could call a URL to determine the time remaining in the session without resetting said time
Edit to help clarify: the purpose here is to have a client-side process that periodically queries the server via AJAX to determine the amount of time left in a user's session. This is to overcome the difficulty of keeping a client-side session-timeout timer in sync with the server-side timer: I'd like the client to simply ask the server "how much time do I have left?" and act accordingly. Of course, if the act of asking resets the timeout, then this won't work, as the AJAX "time left" requests would essentially become session keep-alives. So I need to be able to make an AJAX query to the server without resetting the user's session timeout timer.
I believe cherrypy uses the expiration time in the cookie with the key session_id. Mine says:
Wed 22 Jan 2014 03:44:31 PM EST
You could extend the expiration with your set of circumstances and edit the session cookie.
EDIT: You will also need to extend the server timeout...
cherrypy.request.config.update({'tools.sessions.timeout': 60})
https://groups.google.com/forum/#!topic/cherrypy-users/2yrG79QoYFQ
Hope this helps!
You need to subclass the session class, add a "stats" function to it, and add a flag to prevent saving the session in the "stats" request handler. Alternatively, disable sessions in the config for the "stats" path and load the session expiry info directly from your storage without using the normal session class.
I have found the answer to question 2) while going through the source code of the cherrypy session class. Apparently, you simply do not save the session after serving such requests; this then also avoids updating the expiration time (and does not save any changes to the session object).
I found in the source code that setting cherrypy.serving.request._sessionsaved = True does exactly that, and added a decorator for convenience:
import cherrypy

def nosessionsave(func):
    """
    Decorator to skip session saving, and thus avoid resetting the session timeout.
    """
    def decorate(*args, **kwargs):
        cherrypy.serving.request._sessionsaved = True
        return func(*args, **kwargs)
    return decorate
Just add @nosessionsave before the method def.