Waiting for API response in python3 - python

(background)
I have an ERP application which is managed from a WebLogic console. Recently we noticed that the same activities we perform from the console can be performed using the vendor-provided REST API calls, so we wanted to use this approach programmatically and build some automation.
This is the page from which we control one of the instances (console screenshot omitted).
The same button acts as Stop and Start for the instance.
Start and stop use different API calls, which makes sense.
The complete API doc is at : https://docs.oracle.com/cd/E61420_01/doc.92/e80710/smcrestapis.htm#BABFHBJI
(Now)
I wrote a program in Python using the requests library to call these APIs, and it works fine.
The API response can take anywhere between 20 and 30 seconds when I use the stopInstance API,
and normally 60 to 90 seconds when I use the startInstance API; but if there is an issue starting the instance, it takes more than 300 seconds and goes into an indefinite wait.
My problem is that while starting an instance I want to wait at most 100 seconds for the response. If it takes longer, the program should display a message like "Instance was not able to start in 100 seconds".
This is my program. I am taking input from a text file and all the values present there have been verified.
import requests
import json
import importlib.machinery
import importlib.util
import numpy
import time
import sys

# Load connection details from a text file as if it were a Python module
loader = importlib.machinery.SourceFileLoader('SM', 'sm_details.txt')
spec = importlib.util.spec_from_loader(loader.name, loader)
mod = importlib.util.module_from_spec(spec)
loader.exec_module(mod)

username = str(mod.username)
password = str(mod.password)
hostname = str(mod.servermanagerHostname)
portnum = str(mod.servermanagerPort)
instanceDetails = numpy.array(mod.instanceName)

authenticationAPI = "http://"+hostname+":"+portnum+"/manage/mgmtrestservice/authenticate"
startInstanceAPI = "http://"+hostname+":"+portnum+"/manage/mgmtrestservice/startinstance"

headers = {
    'Content-Type': 'application/json',
    'Cache-Control': 'no-cache',
}

data = {}
data['username'] = username
data['password'] = password
instanceNameDict = {'instanceName': ''}

# Authentication request and storing token
response = requests.post(authenticationAPI, data=json.dumps(data), headers=headers)
token = response.headers['TOKEN']
head2 = {}
head2['TOKEN'] = token

def start(instance):
    print(f"\nTrying to start instance : {instance['instanceName']}")
    startInstanceResponse = requests.post(startInstanceAPI, data=json.dumps(instance), headers=head2)  # this is where the program is stuck and it does not move to the time.sleep step
    time.sleep(100)
    if startInstanceResponse.status_code == 200:
        print('Instance ' + instance['instanceName'] + ' started.')
    else:
        print('Could not start instance in 100 seconds')
        sys.exit(1)

I would suggest using the timeout parameter in requests:
requests.post(startInstanceAPI, data=json.dumps(instance), headers=head2, timeout=100.0)
You can tell Requests to stop waiting for a response after a given
number of seconds with the timeout parameter. Nearly all production
code should use this parameter in nearly all requests. Failure to do
so can cause your program to hang indefinitely.
Source
Here's the requests timeout documentation, where you will also find more details on exception handling.
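Applied to the asker's start() function, a minimal sketch of catching the timeout and printing the required message; it is meant to drop into the question's script, so startInstanceAPI and head2 come from there:
import requests
import json
import sys

def start(instance):
    print(f"\nTrying to start instance : {instance['instanceName']}")
    try:
        # Raises requests.exceptions.Timeout if no response arrives in time
        startInstanceResponse = requests.post(
            startInstanceAPI,
            data=json.dumps(instance),
            headers=head2,
            timeout=100.0,
        )
    except requests.exceptions.Timeout:
        print('Instance was not able to start in 100 seconds')
        sys.exit(1)
    if startInstanceResponse.status_code == 200:
        print('Instance ' + instance['instanceName'] + ' started.')
Note that timeout bounds the connect wait and each read wait rather than the total transfer time; for a call like this, which returns a small body, it acts as the per-request deadline the asker wants.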

Related

Python post request display message if response taking longer than x seconds

I have the following Python code that fetches data from a remote JSON file. Processing the remote JSON file can sometimes be quick and sometimes take a little while, so I put a "please wait" print message before the post request. This works fine. However, for requests that are quick, the "please wait" is pointless. Is there a way to display the "please wait" message only if the request is taking longer than x seconds?
try:
    print("Please wait")
    r = requests.post(url = "http://localhost/test.php")
    r_data = r.json()
You can do it using a second thread, as follows:
import threading
import time
import requests

isDone = False  # tracks the request status; must exist before the thread starts

def th():
    time.sleep(2)  # if the download takes more than 2 seconds
    if not isDone:
        print("Please wait...")

dl_thread = threading.Thread(target=th)  # create new thread that executes function th when the thread is started
dl_thread.start()  # start the thread

r = requests.post(url="http://localhost/test.php")
isDone = True
r_data = r.json()
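A tidier variant (my suggestion, not part of the original answer) uses threading.Event, which removes the fixed sleep and the unsynchronized flag:
import threading
import requests

done = threading.Event()

def th():
    # wait() returns False if the event was not set within the timeout
    if not done.wait(timeout=2):
        print("Please wait...")

threading.Thread(target=th).start()
r = requests.post(url="http://localhost/test.php")
done.set()  # tell the watcher thread the request finished
r_data = r.json()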

Simple function to respect Twitter's V2 API rate limits?

Problem:
Often we'd like to pull much more data than Twitter would like us to at one time. Between queries, it would be wonderful if there were a simple function to call that checks whether you need to wait.
Question:
What is a simple function for respecting Twitter's API limits, ensuring that any long-running query completes successfully without harassing Twitter, and that the querying user does not get banned?
Ideal Answer:
The most ideal answer would be a portable function that should work in all situations. That is, finish (properly) no matter what, and respect Twitter's API rate limit rules.
Caveat
I have submitted a working answer of my own but I am unsure if there is a way to improve it.
I am developing a Python package to utilize Twitter's new V2 API. I want to make sure that I am respecting Twitter's rate limits as best as I possibly can.
Below are the two functions used to wait when needed. They check the API call response headers for remaining queries and also rely on Twitter's documented HTTP codes as an ultimate backup. As far as I can tell, these three HTTP codes are the only time-related errors; the others should be raised to the API user to inform them of whatever they are doing incorrectly.
from datetime import datetime
from osometweet.utils import pause_until

def manage_rate_limits(response):
    """Manage Twitter V2 Rate Limits

    This method takes in a `requests` response object after querying
    Twitter and uses the headers["x-rate-limit-remaining"] and
    headers["x-rate-limit-reset"] headers objects to manage Twitter's
    most common, time-dependent HTTP errors.
    """
    while True:
        # Get number of requests left with our tokens
        remaining_requests = int(response.headers["x-rate-limit-remaining"])

        # If that number is one, we get the reset-time
        # and wait until then, plus 15 seconds.
        # The regular 429 exception is caught below as well,
        # however, we want to program defensively, where possible.
        if remaining_requests == 1:
            buffer_wait_time = 15
            resume_time = datetime.fromtimestamp(int(response.headers["x-rate-limit-reset"]) + buffer_wait_time)
            print(f"One request from being rate limited. Waiting on Twitter.\n\tResume Time: {resume_time}")
            pause_until(resume_time)

        # Explicitly checking for time dependent errors.
        # Most of these errors can be solved simply by waiting
        # a little while and pinging Twitter again - so that's what we do.
        if response.status_code != 200:
            # Too many requests error
            if response.status_code == 429:
                buffer_wait_time = 15
                resume_time = datetime.fromtimestamp(int(response.headers["x-rate-limit-reset"]) + buffer_wait_time)
                print(f"Too many requests. Waiting on Twitter.\n\tResume Time: {resume_time}")
                pause_until(resume_time)

            # Twitter internal server error
            elif response.status_code == 500:
                # Twitter needs a break, so we wait 30 seconds
                resume_time = datetime.now().timestamp() + 30
                print(f"Internal server error @ Twitter. Giving Twitter a break...\n\tResume Time: {resume_time}")
                pause_until(resume_time)

            # Twitter service unavailable error
            elif response.status_code == 503:
                # Twitter needs a break, so we wait 30 seconds
                resume_time = datetime.now().timestamp() + 30
                print(f"Twitter service unavailable. Giving Twitter a break...\n\tResume Time: {resume_time}")
                pause_until(resume_time)

            # If we get this far, we've done something wrong and should exit
            raise Exception(
                "Request returned an error: {} {}".format(
                    response.status_code, response.text
                )
            )

        # Each time we get a 200 response, exit the function and return the response object
        if response.ok:
            return response
Here is the pause_until function.
import sys
import time as pytime
from time import sleep
from datetime import datetime, timezone

def pause_until(time):
    """Pause your program until a specific end time. 'time' is either
    a valid datetime object or unix timestamp in seconds (i.e. seconds
    since Unix epoch)."""
    end = time

    # Convert datetime to unix timestamp and adjust for locality
    if isinstance(time, datetime):
        # If we're on Python 3 and the user specified a timezone,
        # convert to UTC and get the timestamp.
        if sys.version_info[0] >= 3 and time.tzinfo:
            end = time.astimezone(timezone.utc).timestamp()
        else:
            zoneDiff = pytime.time() - (datetime.now() - datetime(1970, 1, 1)).total_seconds()
            end = (time - datetime(1970, 1, 1)).total_seconds() + zoneDiff

    # Type check
    if not isinstance(end, (int, float)):
        raise Exception('The time parameter is not a number or datetime object')

    # Now we wait
    while True:
        now = pytime.time()
        diff = end - now

        # Time is up!
        if diff <= 0:
            break
        else:
            # 'logarithmic' sleeping to minimize loop iterations
            sleep(diff / 2)
This seems to work quite nicely, but I'm not sure if there are edge cases that will break it, or whether there is simply a more elegant/simple way to do this.
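For reference, a quick usage sketch of pause_until covering both input forms its docstring allows (values purely illustrative):
import time
from datetime import datetime, timedelta

pause_until(datetime.now() + timedelta(seconds=10))  # datetime form
pause_until(time.time() + 10)                        # unix-timestamp form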

How to update object configuration periodically?

I'm writing a connector to our CRM system. The CRM has its own configuration that I want to be aware of. The CRM is the only source of truth for this configuration and provides it via an API. Right now my connector lives in a Python package as a Python class. The CRM configuration is fetched on init, but since it can be changed on the CRM side, I want it to be updated periodically. Is there a good way to create some kind of task on object instance creation to perform configuration updates?
class Crm:
    def __init__(self, crm_config, mongo_connection_string):
        self.update_crm_configuration()

    def update_crm_configuration(self):
        self.crm_configuration = self.get_crm_configuration_from_crm_api()

    def get_crm_configuration_from_crm_api(self):
        r = self._send_crm_request_wrap(send_request_func=self._send_get_crm_configuration)
        return self._parse_crm_configuration_response(r.text)
Now I update configurations one time, but I need to update them periodically.
It appears the best way is not to use a separate thread or task with periodic updates, but to save the last time the configuration was updated and, if that age exceeds some timeout, refresh the configuration before actually performing the request.
Or, if your API has the luxury of a clear "configuration was changed" exception, it is even better to perform the configuration update in the response handler before retrying the request.
I'm using a request wrapper for these purposes.
def _send_crm_request_wrap(self, send_request_func, func_params=(),
                           check_crm_configuration=True,
                           retries_limit=None):
    if check_crm_configuration \
            and time.time() - self.last_update_crm_configuration_time > CRM_CONFIGURATION_TIMEOUT:
        self.update_crm_configuration()

    while self.is_crm_locked():
        time.sleep(1000)

    if not self.is_authorized():
        self.auth()

    r = send_request_func(*func_params)

    if retries_limit is None:
        retries_limit = self.max_retries
    retry = 1
    while r.status_code == 205 and retry <= retries_limit:
        waiting_time = randint(1000, 2000)
        logging.info(f'Retry {retry} for {send_request_func.__name__}. Waiting for {waiting_time} sec')
        time.sleep(waiting_time)
        r = send_request_func(*func_params)
        retry += 1

    if r.status_code not in [200]:
        message = f'AMO CRM {send_request_func.__name__} with args={func_params} failed. ' \
                  f'Error: {r.status_code} {r.text}'
        logging.error(message)
        raise ConnectionError(message)
    return r
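Note that the wrapper reads self.last_update_crm_configuration_time, which is never set in the snippets above; a minimal sketch of recording it (the timeout constant is an illustrative value):
import time

CRM_CONFIGURATION_TIMEOUT = 600  # seconds; illustrative value

class Crm:
    # ... other methods as above ...

    def update_crm_configuration(self):
        self.crm_configuration = self.get_crm_configuration_from_crm_api()
        # Remember when the configuration was last refreshed so the
        # request wrapper can decide whether it is stale.
        self.last_update_crm_configuration_time = time.time()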

How to send multiple requests and make sure the responses come back within a second in Python

I am trying to validate the throttle limit for an endpoint using Python code.
The throttle limit set on the endpoint I am testing is 3 calls/sec. The test makes 4 calls and checks that the status codes include at least one 429 response.
The validation sometimes fails because it looks like the responses take more than a second to come back. The code I tried:
Method1:
request = requests.Request(method='GET', url=GLOBALS["url"], params=context.payload, headers=context.headers)
context.upperlimit = int(GLOBALS["ThrottleLimit"]) + 1
reqs = [request for i in range(0, context.upperlimit)]

with BaseThrottler(name='base-throttler', reqs_over_time=(context.upperlimit, 1)) as bt:
    throttled_requests = bt.multi_submit(reqs)

context.responses = [tr.response for tr in throttled_requests]
assert(429 in [i.status_code for i in context.responses])
Method2:
request = requests.get(url=GLOBALS["url"], params=context.payload, headers=context.headers)
url = request.url

urls = set([])
for i in range(0, context.upperlimit):
    urls.add(grequests.get(url))

context.responses = grequests.map(urls)
assert(429 in [i.status_code for i in context.responses])
Is there a way to make sure all the responses came back within the same second, and if not, to try again before failing the test?
I suppose you are using the requests and grequests libraries. You can set a timeout as explained in the docs, and likewise for grequests.
Plain requests
requests.get(url, timeout=1)
Using grequests
grequests.get(url, timeout=1)
Timeout value is "number of seconds"
Using timeout won't necessarily ensure the condition that you are looking for, which is that all 4 requests were received by the endpoint within one second (not that each individual response was received within one second of sending the request).
One quick and dirty way to solve this is to simply time the execution of the code, and ensure that all responses were received in less than a second (using the timeit module)
import timeit

start_time = timeit.default_timer()
context.responses = grequests.map(urls)
elapsed = timeit.default_timer() - start_time

if elapsed < 1:
    assert(429 in [i.status_code for i in context.responses])
This is crude because it is checking round trip time, but will ensure that all requests were received within a second. If you need more specificity, or find that the condition is not met often enough, you could add a header to the response with the exact time the request was received by the endpoint, and then verify that all requests hit the endpoint within one second of each other.
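To cover the "try again before failing" part of the question, a rough sketch that re-runs the whole burst a few times; MAX_ATTEMPTS and the reuse of context.upperlimit and url are assumptions, not part of the original answer:
import timeit
import grequests

MAX_ATTEMPTS = 3  # illustrative retry budget

for attempt in range(MAX_ATTEMPTS):
    reqs = [grequests.get(url) for _ in range(context.upperlimit)]
    start_time = timeit.default_timer()
    context.responses = grequests.map(reqs)
    elapsed = timeit.default_timer() - start_time
    if elapsed < 1:
        # The whole burst finished inside one second, so the
        # throttle assertion is meaningful this time.
        assert 429 in [r.status_code for r in context.responses if r is not None]
        break
else:
    raise AssertionError(f"Burst never completed within 1 s after {MAX_ATTEMPTS} attempts")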

PyAPNs and the need to Sleep between Sends

I am using PyAPNs to send notifications to iOS devices. I am often sending groups of notifications at once. If any of the tokens is bad for any reason, the process will stop. As a result I am using the enhanced setup and the following method:
apns.gateway_server.register_response_listener
I use this to track which token was the problem and then pick up from there, sending to the rest. The issue is that, when sending, the only way to trap these errors is to use a sleep timer between token sends. For example:
for x in self.retryAPNList:
    apns.gateway_server.send_notification(x, payload, identifier=token)
    time.sleep(0.5)
If I don't use a sleep timer, no errors are caught and my entire APN list is not sent to, because the process stops when it hits a bad token. However, the sleep timer is somewhat arbitrary: sometimes 0.5 seconds is enough, while other times I have had to set it to 1. In no case has it worked without some sleep delay. Doing this slows down web calls, and entering arbitrary sleep times feels less than bulletproof.
Any suggestions for how this can work without a delay between APN calls, or is there a best practice for the delay needed?
Adding more code due to the request made below. Here are 3 methods inside of a class that I use to control this:
class PushAdmin(webapp2.RequestHandler):
    retryAPNList = []
    channelID = ""
    channelName = ""
    userName = ""
    apns = APNs(use_sandbox=True, cert_file="mycert.pem", key_file="mykey.pem", enhanced=True)

    def devChannelPush(self, channel, name, sendAlerts):
        ucs = UsedChannelStore()
        pus = PushUpdateStore()
        channelName = ""
        refreshApnList = pus.getAPN(channel)
        if sendAlerts:
            alertApnList, channelName = ucs.getAPN(channel)
            if not alertApnList: alertApnList = []
            if not refreshApnList: refreshApnList = []
            pushApnList = list(set(alertApnList + refreshApnList))
        elif refreshApnList:
            pushApnList = refreshApnList
        else:
            pushApnList = []
        self.retryAPNList = pushApnList
        self.channelID = channel
        self.channelName = channelName
        self.userName = name
        self.retryAPNPush()

    def retryAPNPush(self):
        token = -1
        payload = Payload(alert="A message from " + self.userName + " posted to " + self.channelName, sound="default", badge=1, custom={"channel": self.channelID})
        if len(self.retryAPNList) > 0:
            token += 1
            for x in self.retryAPNList:
                self.apns.gateway_server.send_notification(x, payload, identifier=token)
                time.sleep(0.5)
Below is the calling class (abbreviated to remove unrelated items):
class ChannelStore(ndb.Model):
    def writeMessage(self, ID, name, message, imageKey, fileKey):
        notify = PushAdmin()
        notify.devChannelPush(ID, name, True)
Below is the slight change I made to the placement of the sleep timer that seems to have resolved the issue. I am, however, still concerned about whether the chosen delay will be the right amount in all circumstances.
def retryAPNPush(self):
    identifier = 1
    token = -1
    payload = Payload(alert="A message from " + self.userName + " posted to " + self.channelName, sound="default", badge=1, custom={"channel": self.channelID})
    if len(self.retryAPNList) > 0:
        token += 1
        for x in self.retryAPNList:
            self.apns.gateway_server.send_notification(x, payload, identifier=token)
            time.sleep(0.5)
Resolution:
As noted in the comments at the bottom, the resolution was to move the following statement to module level, outside the class. With that change there is no need for any sleep statements.
apns = APNs(use_sandbox=True,cert_file="mycert.pem", key_file="mykey.pem", enhanced=True)
In fact, PyAPNs will automatically resend dropped notifications for you; please see the PyAPNs documentation.
So you don't have to retry yourself; you can just record which notifications had bad tokens.
The behavior of your code might result from the APNs object being kept in local scope (within if len(self.retryAPNList)>0:).
I suggest pulling the APNs object out to class or module level so that it can complete its error-handling procedure and reuse the TCP connection.
Please kindly let me know if it helps, thanks :)
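Putting the resolution together, a minimal sketch of the module-level setup with a response listener; the listener body and certificate file names are illustrative:
from apns import APNs, Payload

# Module level: one APNs connection for the process lifetime, so PyAPNs
# can complete its error handling and resend dropped notifications itself.
apns = APNs(use_sandbox=True, cert_file="mycert.pem", key_file="mykey.pem", enhanced=True)

def response_listener(error_response):
    # Record which identifier/token failed; no sleep between sends is needed.
    print("APNs error-response:", error_response)

apns.gateway_server.register_response_listener(response_listener)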
