Using OpenAPI Generator, I generated a Python client for the Amadeus Travel Restrictions API spec.
According to the generated README, I understand that I can pass headers with the header_params parameter, but I receive the following error:
TypeError: g_et_covid_report() got an unexpected keyword argument 'header_params'
Below you can check my code:
import openapi_client
from openapi_client.apis.tags import covid19_area_report_api
from openapi_client.model.disease_area_report import DiseaseAreaReport
from openapi_client.model.error import Error
from openapi_client.model.meta import Meta
from openapi_client.model.warning import Warning
from pprint import pprint

configuration = openapi_client.Configuration(
    host="https://test.api.amadeus.com/v2"
)

with openapi_client.ApiClient(configuration) as api_client:
    api_instance = covid19_area_report_api.Covid19AreaReportApi(api_client)
    query_params = {
        'countryCode': "US",
    }
    header_params = {
        'Authorization': "Bearer MY_ACCESS_TOKEN",
    }
    try:
        api_response = api_instance.g_et_covid_report(
            query_params=query_params,
            header_params=header_params
        )
        pprint(api_response)
    except openapi_client.ApiException as e:
        print("Exception when calling Covid19AreaReportApi->g_et_covid_report: %s\n" % e)
I've been checking the library and the OpenAPI Generator project on GitHub, but I still can't find a way to pass the header parameters I need in order to authorise my API call.
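For context, this is the kind of alternative I was expecting to find. It is only a sketch: it assumes the generated Configuration accepts an access_token and that the ApiClient exposes set_default_header, as other openapi-generator Python clients do; I have not confirmed that either exists in this particular generated client.

import openapi_client
from openapi_client.apis.tags import covid19_area_report_api

configuration = openapi_client.Configuration(
    host="https://test.api.amadeus.com/v2",
    # Assumption: the Configuration takes a Bearer/OAuth2 token directly.
    access_token="MY_ACCESS_TOKEN",
)

with openapi_client.ApiClient(configuration) as api_client:
    # Alternative assumption: ApiClient exposes set_default_header, so the
    # Authorization header would be attached to every request it sends.
    api_client.set_default_header('Authorization', 'Bearer MY_ACCESS_TOKEN')

    api_instance = covid19_area_report_api.Covid19AreaReportApi(api_client)
    api_response = api_instance.g_et_covid_report(query_params={'countryCode': "US"})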
I'm trying to "wrap" Google Python Client for AI Platform (Unified) into a Cloud Function.
import json

from google.cloud import aiplatform
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Value


def infer(request):
    """Responds to any HTTP request.
    Args:
        request (flask.Request): HTTP request object.
    Returns:
        The response text or any set of values that can be turned into a
        Response object using
        `make_response <http://flask.pocoo.org/docs/1.0/api/#flask.Flask.make_response>`.
    """
    request_json = request.get_json()
    project = "simple-1234"
    endpoint_id = "7106293183897665536"
    location = "europe-west4"
    api_endpoint = "europe-west4-aiplatform.googleapis.com"
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.PredictionServiceClient(client_options=client_options)
    # For more info on the instance schema, please use get_model_sample.py
    # and look at the yaml found in instance_schema_uri.
    endpoint = client.endpoint_path(
        project=project, location=location, endpoint=endpoint_id
    )
    instance = request.json["instances"]
    instances = [instance]
    parameters_dict = {}
    parameters = json_format.ParseDict(parameters_dict, Value())
    try:
        response = client.predict(endpoint=endpoint, instances=instances, parameters=parameters)
        if 'error' in response:
            return (json.dumps({"msg": 'Error during prediction'}), 500)
    except Exception as e:
        print("Exception when calling predict: ", e)
        return (json.dumps({"msg": 'Exception when calling predict'}), 500)
    print(" deployed_model_id:", response.deployed_model_id)
    # See gs://google-cloud-aiplatform/schema/predict/prediction/tables_classification.yaml
    # for the format of the predictions.
    predictions = response.predictions
    for prediction in predictions:
        print(" prediction:", dict(prediction))
    return (json.dumps({"prediction": response['predictions']}), 200)
When calling client.predict() I get a 400 error:
{"error": "Required property Values is not found"}
What am I doing wrong?
I believe your parameters variable is not correct. In the documentation example, that variable is set like this:
parameters = predict.params.ImageClassificationPredictionParams(
    confidence_threshold=0.5, max_predictions=5,
).to_value()
This is probably why the error says the properties are not found. You will have to set your own parameters and then call the predict method.
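To make that concrete, here is a minimal sketch of building the instances and parameters with the schema helper classes before calling predict. It assumes an image classification endpoint for concreteness (the exact instance/params classes depend on your model type), and the local file "image.jpg" is a hypothetical input; the project, location and endpoint values are the ones from the question.

import base64

from google.cloud import aiplatform
from google.cloud.aiplatform.gapic.schema import predict

client = aiplatform.gapic.PredictionServiceClient(
    client_options={"api_endpoint": "europe-west4-aiplatform.googleapis.com"}
)
endpoint = client.endpoint_path(
    project="simple-1234", location="europe-west4", endpoint="7106293183897665536"
)

# Build the instance and parameters with the schema helpers instead of raw dicts.
with open("image.jpg", "rb") as f:
    encoded_content = base64.b64encode(f.read()).decode("utf-8")
instance = predict.instance.ImageClassificationPredictionInstance(
    content=encoded_content,
).to_value()
parameters = predict.params.ImageClassificationPredictionParams(
    confidence_threshold=0.5, max_predictions=5,
).to_value()

response = client.predict(endpoint=endpoint, instances=[instance], parameters=parameters)
print(response.predictions)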
The code that I included to call an API from AWS Lambda is given below. The urllib3 Python library was uploaded as a zip folder successfully, but when I try to access the particular intent in AWS Lambda (Python 3.6), I get
"The remote endpoint could not be called, or the response it returned was invalid".
Why is that? What prerequisites are needed before making API calls in Python 3.6? I used the urllib3 library and uploaded it as a zip folder. Is anything else required?
def get_weather(session):
    should_end_session = False
    speech_output = " "
    reprompt_text = ""
    api = "some url ...."
    http = urllib3.PoolManager()
    response = http.request('GET', api)
    weather_status = json.loads(response.data.decode('utf-8'))
    for weather in weather_status:
        final_weather = weather["WeatherText"]
    return build_response(session_attributes, build_speechlet_response(speech_output, reprompt_text, should_end_session))
Scenario: to obtain the weather using a third-party API.

import json
import urllib3

def get_weather():
    api = "some url ...."
    http = urllib3.PoolManager()
    response = http.request('GET', api)
    weather_status = json.loads(response.data.decode('utf-8'))
    for weather in weather_status:
        final_weather = weather["WeatherText"]  # The attribute "WeatherText" will vary depending on the weather API you are using.
    return final_weather

get_weather()  # simple function call
Try printing response.data so you can see it in the logs; that might give you a clue. I would also try switching to Python Requests instead of urllib3. You may also need to set the Content-Type header, depending on the implementation of the API you're calling (a minimal sketch with an explicit Content-Type follows the code below).
from __future__ import print_function
import json
from botocore.vendored import requests

def lambda_handler(event, context):
    print('received request: ' + str(event))
    doctor_intent = event['currentIntent']['slots']['doctor']
    email_intent = event['currentIntent']['slots']['email']
    print(doctor_intent, email_intent)
    print(type(doctor_intent), type(email_intent))
    utf8string = doctor_intent.encode("utf-8")
    utf8string1 = email_intent.encode("utf-8")
    print(type(utf8string))
    print(type(utf8string1))
    car1 = {"business_name": utf8string, "customer_email": utf8string1}
    r = requests.post('https://postgresheroku.herokuapp.com/update',
                      json=car1)
    # print("JSON : ", r.json())
    print(r.json())
    data = str(r.json())
    print(type(data))
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {
                "contentType": "PlainText",
                "content": "Thank you for booking appointment with {doctor} {response}".format(
                    doctor=doctor_intent, response=data)
            }
        }
    }
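The snippet above already does the switch to Requests, using the json= keyword (which sets the Content-Type for you). If the API you call is picky about headers, a minimal sketch of the post call with an explicit Content-Type might look like this; the URL is the one from the snippet, and the payload values are placeholders:

import json
from botocore.vendored import requests  # or a bundled `requests` package

payload = {"business_name": "some business", "customer_email": "user@example.com"}  # placeholder values
r = requests.post(
    'https://postgresheroku.herokuapp.com/update',
    data=json.dumps(payload),
    headers={'Content-Type': 'application/json'},
    timeout=10,
)
print(r.status_code, r.text)  # log the raw response for debugging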
I am working on a project where I am supposed to get some user input through a web application and send that data to a Cloudant DB. I am using Python for the use case. Below is the sample code:
import requests
import json

dict_key = {}
key = frozenset(dict_key.items())

doc = {
    {
        "_ID": "1",
        "COORD1": "1,1",
        "COORD2": "1,2",
        "COORD3": "2,1",
        "COORD4": "2,2",
        "AREA": "1",
        "ONAME": "abc",
        "STYPE": "black",
        "CROPNAME": "paddy",
        "CROPPHASE": "initial",
        "CROPSTARTDATE": "01-01-2017",
        "CROPTYPE": "temp",
        "CROPTITLE": "rice",
        "HREADYDATE": "06-03-2017",
        "CROPPRICE": "1000",
        "WATERRQ": "1000",
        "WATERSRC": "borewell"
    }
}

auth = ('uid', 'pwd')
headers = {'Content-type': 'application/json'}
post_url = "server_IP".format(auth[0])

req = requests.put(post_url, auth=auth, headers=headers, data=json.dumps(doc))
#req = requests.get(post_url, auth=auth)
print json.dumps(req.json(), indent=1)
When I run the code, I get the error below:
"WATERSRC":"borewell"
TypeError: unhashable type: 'dict'
I searched a bit and found the Stack Overflow question below as a prospective resolution:
TypeError: unhashable type: 'dict'
It says that "To use a dict as a key you need to turn it into something that may be hashed first. If the dict you wish to use as key consists of only immutable values, you can create a hashable representation of it like this:
key = frozenset(dict_key.items())"
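To make that concrete, here is a tiny standalone illustration of the frozenset technique (not specific to the Cloudant call; the dicts are made-up examples):

# A dict cannot be used as a dict key, but a frozenset of its items can be,
# provided every value in the dict is itself hashable.
dict_key = {"COORD1": "1,1", "COORD2": "1,2"}
key = frozenset(dict_key.items())

lookup = {key: "some value"}
# The same items in any order produce an equal frozenset, so the lookup works.
print(lookup[frozenset({"COORD2": "1,2", "COORD1": "1,1"}.items())])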
I have below queries:
1) I have tried using it in my code above,but I am not sure if I have used it correctly.
2) To put the data in the cloudant DB, I am using Python module "requests". In the code, I am using the below line to put the data in the DB:
req = requests.put(post_url, auth=auth,headers=headers, data=json.dumps(doc))
But I am getting the error below:
"reason": "Only GET,HEAD,POST allowed"
I searched on that as well, and found the IBM Bluemix document about it:
https://console.ng.bluemix.net/docs/services/Cloudant/basics/index.html#cloudant-basics
Having referred to the document, I believe I am using the right option, but maybe I am wrong.
If you are adding a document to the database and you know the _id, then you need to do an HTTP POST. Here's some slightly modified code:
import requests
import json

doc = {
    "_id": "2",
    "COORD1": "1,1",
    "COORD2": "1,2",
    "COORD3": "2,1",
    "COORD4": "2,2",
    "AREA": "1",
    "ONAME": "abc",
    "STYPE": "black",
    "CROPNAME": "paddy",
    "CROPPHASE": "initial",
    "CROPSTARTDATE": "01-01-2017",
    "CROPTYPE": "temp",
    "CROPTITLE": "rice",
    "HREADYDATE": "06-03-2017",
    "CROPPRICE": "1000",
    "WATERRQ": "1000",
    "WATERSRC": "borewell"
}

auth = ('admin', 'admin')
headers = {'Content-type': 'application/json'}
post_url = 'http://localhost:5984/mydb'

req = requests.post(post_url, auth=auth, headers=headers, data=json.dumps(doc))
print json.dumps(req.json(), indent=1)
Notice that:
- the _id field is supplied in the doc and is lower case
- the request call is a POST, not a PUT
- the post_url contains the name of the database being written to - in this case mydb
N.B. in the above example I am writing to a local CouchDB, but replacing the URL with your Cloudant URL and adding the correct credentials should get this working for you.
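For completeness, if you do want to use a PUT, CouchDB/Cloudant expects the document id in the URL rather than only in the body. A minimal sketch, reusing the local URL and credentials from above with a trimmed example document:

import json
import requests

auth = ('admin', 'admin')
headers = {'Content-type': 'application/json'}
doc = {"COORD1": "1,1", "AREA": "1"}  # trimmed example document

# PUT writes (or overwrites) the document at a known id: /<database>/<doc_id>
put_url = 'http://localhost:5984/mydb/2'
req = requests.put(put_url, auth=auth, headers=headers, data=json.dumps(doc))
print(json.dumps(req.json(), indent=1))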
I am trying to publish a machine learning model as an Azure web service using Python. I am able to deploy the code successfully, but when I try to call it through the URL, it throws an error saying the 'azure' module doesn't exist. The code basically retrieves a TFIDF model from the container (blob) and uses it to predict the new value. The error clearly says the azure package is missing when running on the web service, and I am not sure how to fix it. Here goes the code:
For deployment:
from azureml import services
from azure.storage.blob import BlobService

#services.publish('7c94eb2d9e4c01cbe7ce1063','f78QWNcOXHt9J+Qt1GMzgdEt+m3NXby9JL`npT7XX8ZAGdRZIX/NZ4lL2CkRkGQ==')
#services.types(res=unicode)
#services.returns(str)
def TechBot(res):
    from azure.storage.blob import BlobService
    from gensim.similarities import SparseMatrixSimilarity, MatrixSimilarity, Similarity
    blob_service = BlobService(account_name='tfidf', account_key='RU4R/NIVPsPOoR0bgiJMtosHJMbK1+AVHG0sJCHT6jIdKPRz3cIMYTsrQ5BBD5SELKHUXgBHNmvsIlhEdqUCzw==')
    blob_service.get_blob_to_path('techbot', "2014.csv", "df")
    df = pd.read_csv("df")
    doct = res
To access the URL, I used the Python code from service.azureml.net:
import urllib2
import json
import requests

data = {
    "Inputs": {
        "input1":
        [
            {
                'res': "wifi wnable",
            }
        ],
    },
    "GlobalParameters": {
    }
}

body = str.encode(json.dumps(data))

#proxies = {"http":"http://%s" % proxy}
url = 'http://ussouthcentral.services.azureml.net/workspaces/7c94eb2de26a45399e4c01cbe7ce1063/services/11943e537e0741beb466cd91f738d073/execute?api-version=2.0&format=swagger'
api_key = '8fH9kp67pEt3C6XK9sXDLbyYl5cBNEwYg9VY92xvkxNd+cd2w46sF1ckC3jqrL/m8joV7o3rsTRUydkzRGDYig=='  # Replace this with the API key for the web service
headers = {'Content-Type': 'application/json', 'Authorization': ('Bearer ' + api_key)}

#proxy_support = urllib2.ProxyHandler(proxies)
#opener = urllib2.build_opener(proxy_support, urllib2.HTTPHandler(debuglevel=1))
#urllib2.install_opener(opener)

req = urllib2.Request(url, body, headers)

try:
    response = urllib2.urlopen(req, timeout=60)
    result = response.read()
    print(result)
except urllib2.HTTPError, error:
    print("The request failed with status code: " + str(error.code))
    # Print the headers - they include the request ID and the timestamp, which are useful for debugging the failure
    print(error.info())
    print(json.loads(error.read()))
The string 'res' will be predicted at the end. As I said, it runs perfectly fine if I run it as it is in Python by calling the azure module; the problem happens when I access the URL.
Any help is appreciated. Please let me know if you need more information (I only showcased half of my code).
I tried to reproduce the issue via Postman, and I got the error information below, as you said.
{
    "error": {
        "code": "ModuleExecutionError",
        "message": "Module execution encountered an error.",
        "details": [
            {
                "code": "85",
                "target": "Execute Python Script RRS",
                "message": "Error 0085: The following error occurred during script evaluation, please view the output log for more information:\r\n---------- Start of error message from Python interpreter ----------\r\nCaught exception while executing function: Traceback (most recent call last):\n File \"\\server\\InvokePy.py\", line 120, in executeScript\n outframe = mod.azureml_main(*inframes)\n File \"\\temp\\1280677032.py\", line 1094, in azureml_main\n File \"<ipython-input-15-bd03d199b8d9>\", line 6, in TechBot_2\nImportError: No module named azure\n\r\n\r\n---------- End of error message from Python interpreter ----------"
            }
        ]
    }
}
According to the error code 0085 and the information ImportError: No module named azure, I think the issue was caused by importing the Python module azure-storage. There is a similar SO thread, Access Azure blob storage from within an Azure ML experiment, which hit the same issue. I think you can refer to its answer and try to use the HTTP protocol instead of HTTPS in your code to resolve the issue, as in client = BlobService(STORAGE_ACCOUNT, STORAGE_KEY, protocol="http").
Hope it helps. If you have any concerns or updates, please feel free to let me know.
Update: Using HTTP protocol for BlobService
from azureml import services
from azure.storage.blob import BlobService

#services.publish('7c94eb2d9e4c01cbe7ce1063','f78QWNcOXHt9J+Qt1GMzgdEt+m3NXby9JL`npT7XX8ZAGdRZIX/NZ4lL2CkRkGQ==')
#services.types(res=unicode)
#services.returns(str)
def TechBot(res):
    from azure.storage.blob import BlobService
    from gensim.similarities import SparseMatrixSimilarity, MatrixSimilarity, Similarity
    # Begin: Update code
    # Using `HTTP` protocol for BlobService
    blob_service = BlobService(account_name='tfidf',
                               account_key='RU4R/NIVPsPOoR0bgiJMtosHJMbK1+AVHG0sJCHT6jIdKPRz3cIMYTsrQ5BBD5SELKHUXgBHNmvsIlhEdqUCzw==',
                               protocol='http')
    # End
    blob_service.get_blob_to_path('techbot', "2014.csv", "df")
    df = pd.read_csv("df")
    doct = res
This is a test script to request data from the Rovi API, provided by the API itself.
test.py
import requests
import time
import hashlib
import urllib

class AllMusicGuide(object):

    api_url = 'http://api.rovicorp.com/data/v1.1/descriptor/musicmoods'
    key = 'my key'
    secret = 'secret'

    def _sig(self):
        timestamp = int(time.time())
        m = hashlib.md5()
        m.update(self.key)
        m.update(self.secret)
        m.update(str(timestamp))
        return m.hexdigest()

    def get(self, resource, params=None):
        """Take a dict of params, and return what we get from the api"""
        if not params:
            params = {}
        params = urllib.urlencode(params)
        sig = self._sig()
        url = "%s/%s?apikey=%s&sig=%s&%s" % (self.api_url, resource, self.key, sig, params)
        resp = requests.get(url)
        if resp.status_code != 200:
            # THROW APPROPRIATE ERROR
            print ('unknown err')
        return resp.content
from another script I import the module:
from roviclient.test import AllMusicGuide
and create an instance of the class inside a mood function:
def mood():
    test = AllMusicGuide()
    print (test.get('[moodids=moodids]'))
According to the documentation, the following is the syntax for requests:
descriptor/musicmoods?apikey=apikey&sig=sig [&moodids=moodids] [&format=format] [&country=country] [&language=language]
but running the script I get the following error:
unknown err
<h1>Gateway Timeout</h1>:
What is wrong?
"504, try once more. 502, it went through."
Your code is fine, this is a network issue. "Gateway Timeout" is a 504. The intermediate host handling your request was unable to complete it. It made its own request to another server on your behalf in order to handle yours, but this request took too long and timed out. Usually this is because of network congestion in the backend; if you try a few more times, does it sometimes work?
In any case, I would talk to your network administrator. There could be any number of reasons for this and they should be able to help fix it for you.
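If retrying is the way to go, here is a minimal sketch of a retry loop around the request. The attempt count and back-off are arbitrary, and get_with_retries is a hypothetical helper, not part of the Rovi client above:

import time
import requests

def get_with_retries(url, attempts=3, backoff=2.0):
    """Retry a GET a few times when the gateway times out (HTTP 504)."""
    resp = None
    for attempt in range(1, attempts + 1):
        resp = requests.get(url, timeout=30)
        if resp.status_code != 504:
            break
        print('504 Gateway Timeout, attempt %d of %d' % (attempt, attempts))
        time.sleep(backoff * attempt)
    return resp

# e.g. resp = get_with_retries(url)  # where url is the signed URL built in AllMusicGuide.get()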