Python requests module: getting the first response after a requests.get API call - python

I am making a GET call through requests in Python 3.7. This GET request triggers a job on the host side. Once the job is triggered, it has a set of attributes such as a run ID and other parameters. The host has an idle timeout of about 120 seconds, and the triggered job runs for longer than that. Since the GET request blocks until a response is returned, it times out after 120 seconds and we get the expected 504 error. But if the job completes within 120 seconds, the response headers contain the run ID and the other attributes.
What I am trying to accomplish: the moment requests.get is submitted, is there a way to get an immediate response back with the run ID and other details? I could then use that run ID to poll the host for the response code, even after 120 seconds, through a separate API call. I searched around but was unsuccessful. If the requests module cannot help here, please advise whether other modules would suit my need.
Any input is much appreciated.
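
For illustration, here is one pattern that is possible on the client side. requests itself cannot return early from a single blocking GET; the server has to send the run ID back before the gateway times out. What you can do is run the blocking call in a background thread so your program is not stuck waiting, then poll a separate status API afterwards. This is a minimal sketch only: the trigger/status URLs, the X-Run-Id header, and the status fields are all assumptions to be replaced with your host's real API.

import time
from concurrent.futures import ThreadPoolExecutor
import requests

TRIGGER_URL = "https://example.com/jobs/trigger"    # hypothetical endpoint
STATUS_URL = "https://example.com/jobs/status/{}"   # hypothetical endpoint

def trigger_job():
    # May block up to the host's ~120 s idle timeout, so run it off the main thread
    return requests.get(TRIGGER_URL, timeout=130)

pool = ThreadPoolExecutor(max_workers=1)
future = pool.submit(trigger_job)
# ... the main thread is free to do other work while the GET is in flight ...
try:
    response = future.result(timeout=130)        # wait for the headers (or a 504)
    run_id = response.headers.get("X-Run-Id")    # assumed header name
except Exception:
    run_id = None                                # timed out or failed: nothing to poll with
pool.shutdown(wait=False)

if run_id is not None:
    while True:
        status = requests.get(STATUS_URL.format(run_id), timeout=10).json()
        if status.get("state") in ("SUCCEEDED", "FAILED"):   # assumed response fields
            break
        time.sleep(5)                            # poll every 5 seconds

Note that this only helps when the run ID actually arrives before the timeout; if the host never responds within 120 seconds, only a change on the server side (returning the run ID immediately and running the job asynchronously) can fix that.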

Related

Python requests library: multiple API calls sequentially in a for loop

In our project we read historical data (the past 45 days) from a Solr DB. We can read at most 5 days of data in a single API call, so we call the API sequentially in a for loop to cover the 45 days. But we are observing 400 status codes for some of the API calls, seemingly at random: of the 9 calls in total, some return 200 and some return 400, and if I rerun the job, a call that returned 400 earlier might return 200 this time.
I checked with the API owner, who said it is because we are issuing the next API call before the previous one has completed.
1.) How can I identify in Python that an API request has completed, so that I call the next API only when the previous request is done? Can this be answered only by the API owner, or is there a way in the Python requests library?
2.) Should I use a sleep statement after each API call? But then how do I know the sleep time, and is that an efficient approach? (See the sketch below.)
Thanks
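
For what it is worth, requests is synchronous: each call in a for loop does finish before the next one starts, so the sporadic 400s are more likely server-side throttling than overlapping requests. A hedged sketch of a retry loop with a simple backoff (the URL and date parameters are placeholders, not the real Solr API):

import time
import requests

def fetch_window(url, params, retries=3, backoff=2.0):
    # GET one 5-day window, retrying transient 400s with a growing pause
    for attempt in range(retries):
        resp = requests.get(url, params=params, timeout=30)
        if resp.status_code == 200:
            return resp.json()
        time.sleep(backoff * (attempt + 1))
    resp.raise_for_status()   # still failing after the last retry

for start in range(0, 45, 5):   # nine 5-day windows
    data = fetch_window("https://solr.example.com/history",         # placeholder URL
                        {"startDay": start, "endDay": start + 5})   # placeholder params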

Python GET request from live/infinite API endpoint

I want to get info from an API via Python. The endpoint updates its information indefinitely (it is live, for example live video or live monitoring). I want to stop the GET request after an interval (for example 1 second), then process the information received so far, and then repeat the cycle.
Any ideas? (I am currently using the requests module, but I do not know how to stop receiving data and then get at what was received.)
I might be off here, but if you hit an endpoint at a specific time, it should return the JSON at that particular moment. You could then store it and use it in whatever process you have created.
If you want to hit it again, you would just use requests to hit the endpoint.
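
If the endpoint really streams continuously rather than returning a snapshot, a time-boxed read with stream=True is one way to stop after an interval and process what arrived. A minimal sketch, with a placeholder URL:

import time
import requests

def read_for(url, seconds=1.0):
    chunks = []
    deadline = time.monotonic() + seconds
    # stream=True keeps requests from trying to download the whole (endless) body
    with requests.get(url, stream=True, timeout=5) as resp:
        for chunk in resp.iter_content(chunk_size=1024):
            chunks.append(chunk)
            if time.monotonic() >= deadline:
                break   # leaving the with-block closes the connection
    return b"".join(chunks)

while True:
    data = read_for("https://example.com/live", seconds=1.0)   # placeholder URL
    # ... process `data` here, then repeat the cycle ...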

waiting for completion of get request

I have to get several pages of a JSON API with about 130,000 entries.
The request is fairly simple with:
response = requests.request("GET", url, headers=headers, params=querystring)
where the querystring contains an access token and the headers are fairly simple.
I created a while loop where basically every request URL is of the form
https://urlprovider.com/endpointname?pageSize=10000&rowStart=0
and rowStart increments by pageSize until there are no further pages.
The problem I encounter is the following response after about 5-8 successful requests:
{'errorCode': 'ERROR_XXX', 'code': 503, 'message': 'Maximum limit for unprocessed API requests have been reached. Please try again later.', 'success': False}
From the error message I gather that I am initiating the next request before the last one has finished. Does anyone know how I can make sure the GET request has finished before the next one starts (other than something crude like a sleep()), or whether the error could lie elsewhere?
I found the answer to my question.
requests is synchronous, meaning that it will ALWAYS wait until the call has finished before continuing.
The response from the API provider is therefore misleading, as each request has already been processed before the next one is sent.
The root cause is difficult to assess, but it may have to do with a limit imposed by the API provider.
What has worked:
A crude sleep(10), which makes the program wait 10 seconds before processing the next request
Better solution: Create a Session. According to the documentation:
The Session object [...] will use urllib3’s connection pooling. So if you’re making several requests to the same host, the underlying TCP connection will be reused, which can result in a significant performance increase (see HTTP persistent connection).
Not only does this resolve the problem, it also improves performance compared to my initial code.
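
For reference, the paging loop with a Session looks roughly like this (the endpoint, token parameter, and end-of-pages check are placeholders based on the question):

import requests

session = requests.Session()   # reuses the underlying TCP connection

url = "https://urlprovider.com/endpointname"
params = {"accessToken": "<token>",   # placeholder auth, per the question
          "pageSize": 10000, "rowStart": 0}

while True:
    data = session.get(url, params=params, timeout=30).json()
    if not data:   # the exact end-of-pages check depends on the API
        break
    # ... process the page here ...
    params["rowStart"] += params["pageSize"]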

How to know if requestor connection closed/disconnected from client side, Python Pyramid

On the front-end side I have a search box which sends a GET request after every change detected in the text field. To disregard the stale response when a new string is detected, I simply cancel my previous axios GET request, if it is still in progress, before making a new one.
Example:
Text Field = i
GET i
Text Field = ie
Cancel GET i if still waiting for response
GET ie
Text Field = ie1
Cancel GET ie if still waiting for response
GET ie1
On the backend I keep getting a Broken Pipe error. My RESTful API is built in Python using Pyramid, and I need help cancelling the attempt to send the response back if the connection closes.
How can I get Pyramid to check whether the request's client connection is still open or not? Or what is some other solution to this problem?
It is not possible to check if a connection is open or not.
Most web servers are designed to hard-close the connection / kill the thread / kill the process running the HTTP request. The request handler itself cannot know that the client has disconnected until it tries to write the response.
The best approach is to ignore this problem and configure your logging so that these hanging requests do not produce verbose output.
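
A sketch of that logging approach: filter out the broken-pipe records instead of trying to detect the disconnect. The logger name depends on the WSGI server you run under; "waitress" below is an assumption, not a given.

import logging

class IgnoreBrokenPipe(logging.Filter):
    def filter(self, record):
        # drop log records whose message mentions a broken pipe
        return "Broken pipe" not in record.getMessage()

logging.getLogger("waitress").addFilter(IgnoreBrokenPipe())   # assumed logger name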

Python - Get timing properties from network response

How can I get the properties shown in the image (Blocked, DNS resolution, Connecting, ...) after sending the request?
From Firefox, the waiting time is ~650 ms.
From Python, requests.Response.elapsed.total_seconds() gives ~750 ms.
Since the results differ, I want a more detailed breakdown, as shown in Firefox developer mode.
You can only get the total time of the request, because the response itself does not know any more than that.
More detailed information is only logged by the program that handles the request, starting and stopping a timer for each step.
You would need to track the times in your connection framework, or have a look at the Firefox API for "timings"; there are several related APIs, so maybe you will find something you can use for your case. The main point is that you cannot do this directly from your script alone, because the request and response are simply fired/caught, and the detailed measuring happens in the components in between.
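
Since requests only exposes Response.elapsed, per-phase numbers have to be measured at a lower level yourself. A rough sketch that times DNS resolution and the TCP connect separately (an approximation, not Firefox's exact breakdown; the host is a placeholder):

import socket
import time

host, port = "example.com", 443   # placeholder host

t0 = time.monotonic()
sockaddr = socket.getaddrinfo(host, port, socket.AF_INET,
                              socket.SOCK_STREAM)[0][-1]   # DNS resolution
t1 = time.monotonic()
sock = socket.create_connection(sockaddr, timeout=5)       # TCP connect
t2 = time.monotonic()
sock.close()

print(f"DNS: {(t1 - t0) * 1000:.1f} ms, connect: {(t2 - t1) * 1000:.1f} ms")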
