Hi, I want to add a few properties to my API Gateway code, such as a response body. I have added this manually through the AWS console.
Basically, I want to add a response body for the 200 response, with content type application/json and an empty model. Is there any way to do this using CDK in Python?
See apigateway.MethodResponse in the docs:
from aws_cdk import aws_apigateway as apigateway

# model: apigateway.Model

method_response = apigateway.MethodResponse(
    status_code="statusCode",

    # the properties below are optional
    response_models={
        "response_models_key": model
    },
    response_parameters={
        "response_parameters_key": False
    }
)
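To answer the question directly, here is a minimal sketch declaring a 200 method response with content type application/json and an empty model. The names `resource` and `integration` are placeholders for constructs defined elsewhere in your stack:

```python
from aws_cdk import aws_apigateway as apigateway

# Assumes `resource` is an apigateway.Resource and `integration` a
# LambdaIntegration already defined in your stack
resource.add_method(
    "GET",
    integration,
    method_responses=[
        apigateway.MethodResponse(
            status_code="200",
            # application/json mapped to the built-in empty model
            response_models={
                "application/json": apigateway.Model.EMPTY_MODEL,
            },
        )
    ],
)
```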
I have an API Gateway defined in the Python CDK that will accept cURL RESTful requests to upload / read / delete files from an S3 bucket:
api = api_gw.RestApi(self, "file-api",
                     rest_api_name="File REST Service")

file = api.root.add_resource("{id}")

get_files_integration = api_gw.LambdaIntegration(handler,
    request_templates={"application/json": '{ "statusCode": "200" }'})
post_file_integration = api_gw.LambdaIntegration(handler)
get_file_integration = api_gw.LambdaIntegration(handler)
delete_file_integration = api_gw.LambdaIntegration(handler)

api.root.add_method("GET", get_files_integration,
                    authorization_type=api_gw.AuthorizationType.COGNITO,
                    authorizer=auth)
file.add_method("POST", post_file_integration)      # POST /{id}
file.add_method("GET", get_file_integration)        # GET /{id}
file.add_method("DELETE", delete_file_integration)  # DELETE /{id}
Is it possible to enable CORS on the API Gateway so that it will perform pre-flight checks and allow external access from a localhost on another machine?
I have attempted to use the add_cors_preflight() method defined in the documentation I can find, but I believe this may no longer be valid as of CDK 2.0.
Yes, IResource.add_cors_preflight() does exactly this.
You can also specify default CORS config with the default_cors_preflight_options attribute of RestApi.
Here are the examples from the docs. They're in TypeScript, but the same works in Python.
The following example will enable CORS for all methods and all origins on all resources of the API:
new apigateway.RestApi(this, 'api', {
  defaultCorsPreflightOptions: {
    allowOrigins: apigateway.Cors.ALL_ORIGINS,
    allowMethods: apigateway.Cors.ALL_METHODS // this is also the default
  }
})
The following example will add an OPTIONS method to the myResource API resource, which only allows GET and PUT HTTP requests from the origin https://amazon.com.
declare const myResource: apigateway.Resource;
myResource.addCorsPreflight({
  allowOrigins: [ 'https://amazon.com' ],
  allowMethods: [ 'GET', 'PUT' ]
});
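A rough Python translation of the two TypeScript snippets above (an untested sketch; assumes it runs inside a Stack's `__init__`, with `my_resource` an existing apigateway.Resource):

```python
from aws_cdk import aws_apigateway as apigateway

# Enable CORS for all methods and all origins on every resource of the API
api = apigateway.RestApi(
    self, "api",
    default_cors_preflight_options=apigateway.CorsOptions(
        allow_origins=apigateway.Cors.ALL_ORIGINS,
        allow_methods=apigateway.Cors.ALL_METHODS,  # this is also the default
    ),
)

# Or enable it per resource, restricted to one origin and two methods
my_resource.add_cors_preflight(
    allow_origins=["https://amazon.com"],
    allow_methods=["GET", "PUT"],
)
```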
def sender_edit_view(self, authenticationMethod=None, envelopeId='',
                     returnUrl=''):
    if not self.account_url:
        self.login_information()
    url = '/accounts/{accountId}/envelopes/{envelopeId}/views/edit' \
          .format(accountId=self.account_id,
                  envelopeId=envelopeId)
    if authenticationMethod is None:
        authenticationMethod = 'none'
    data = {
        'authenticationMethod': authenticationMethod,
        'returnUrl': returnUrl,
    }
    return self.post(url, data=data, expected_status_code=201)
pydocusign.exceptions.DocuSignException: DocuSign request failed: GET https://demo.docusign.net/restapi/v2/accounts/9286679/envelopes/http://127.0.0.1:5000/views/edit returned code 404 while expecting code 201; Message: ;
I want to redirect to the sender view within the UI.
I am trying to use pydocusign, from:
https://github.com/peopledoc/pydocusign/tree/master/pydocusign
The pydocusign library is not from DocuSign. You're welcome to use it but we (DocuSign folks) can't provide advice about it.
Instead, I suggest that you check out the DocuSign SDK for Python.
There is an example app that includes many example workflows. Workflow example 1 shows how to create an embedded signing ceremony.
Added
For embedded sending, see Workflow example 11.
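Incidentally, the traceback in the question hints at what went wrong: the endpoint path contains the return URL where the envelope ID should be, which can happen if the arguments are passed positionally rather than by keyword. A minimal reproduction of the URL formatting (the template is copied from the question's sender_edit_view):

```python
url_template = '/accounts/{accountId}/envelopes/{envelopeId}/views/edit'

# If the return URL lands in the envelopeId slot, the malformed path
# from the 404 above is produced
bad = url_template.format(accountId='9286679',
                          envelopeId='http://127.0.0.1:5000')
print(bad)  # /accounts/9286679/envelopes/http://127.0.0.1:5000/views/edit
```

Calling sender_edit_view(envelopeId=..., returnUrl=...) with explicit keyword arguments avoids the mix-up.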
I am a beginner with Amazon Web Services.
I have the following Lambda Python function:
import sys
import logging
import pymysql
import json

rds_host = ".amazonaws.com"
name = "name"
password = "123"
db_name = "db"
port = 3306

def save_events(event):
    conn = pymysql.connect(rds_host, user=name, passwd=password, db=db_name,
                           connect_timeout=30)
    with conn.cursor(pymysql.cursors.DictCursor) as cur:
        cur.execute("select * from bodyPart")
        result = cur.fetchall()
    print("Data from RDS...")
    print(result)
    # return the result instead of relying on a global variable
    return json.dumps(result).replace("\"", "'")

def lambda_handler(event, context):
    return save_events(event)
Using the above function I am sending JSON to the client via API Gateway. Now suppose the user selects an item from the list and sends it back as JSON: where will I receive that HTTP request, and how should I process it?
This is additional information to @Harsh Manvar's answer.
The easiest way, I think, is to use Lambda proxy integration
(see the api-gateway-proxy-integration-lambda docs).
API Gateway supports AWS Lambda very well: you can pass the JSON request body to your Lambda function via event['body'].
I use it every day in a hobby project (a Slack command bot, which is harder because you need to map from application/x-www-form-urlencoded to JSON through a mapping template).
For you I think it is simpler, because you are using only JSON for the request and response. The key is to select "Lambda function" as the integration type.
You can find quick tutorials on Medium.com for more detail; I have only linked the docs from Amazon.
@mohith: Hi, I put together a simple approach for you.
First, create an API (see the docs above) and link it to your Lambda function. Since you only use JSON, be sure to check the option named "Use Lambda Proxy integration".
Then deploy the API.
In your function you can then handle the request; in my case I simply return the whole event that is passed to the function.
Finally, you can POST to your endpoint; I used Postman.
I hope you get the idea: once your API is deployed, you can do anything with it from your front end.
I also suggest researching CloudWatch. When you work with API Gateway, Lambda, etc., it is a Swiss army knife; you cannot live without it, and it makes tracing and debugging your code very easy.
Please do not hesitate to ask me anything.
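To make the proxy-integration flow concrete, here is a small sketch of a handler that reads the JSON the client sends back (the field name "item" is made up for illustration):

```python
import json

def lambda_handler(event, context):
    # With Lambda proxy integration, API Gateway delivers the raw
    # request body as a string in event["body"]
    body = json.loads(event.get("body") or "{}")
    selected = body.get("item")

    # The handler must return a proxy-format response:
    # statusCode, headers, and a string body
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"received": selected}),
    }
```

For example, a request body of {"item": "arm"} comes back as {"received": "arm"}.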
You can use the AWS service called API Gateway; it will give you an endpoint for HTTP API requests.
The API Gateway connects to your Lambda, and you can pass HTTP requests through to the function.
Here is info about creating a REST API on Lambda: https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-create-api.html
AWS also provides GET and POST Lambda examples; you just have to edit the code and it will automatically create the API Gateway. You can use them as a reference.
From the Lambda console > Create function > choose AWS Serverless Application Repository > in the search bar type "get" and search > api-lambda-dynamodb > it will take a value from the user and process it in Lambda.
Here is a link where you can check the examples directly: https://console.aws.amazon.com/lambda/home?region=us-east-1#/create?tab=serverlessApps
I have a GCMLE model deployed as a model for GCMLE prediction service. In the GCP console UI I can navigate to the model name -> version and then I can view the Model location (ie gs://..../) which specifies the location of my saved_model.pb file. Is there any way to dynamically obtain this "model location" within a jupyter notebook/python script? I would like to be able to then use this model location to download the saved_model.pb locally so that I can load the model into a local session for debugging purposes on specific inference tasks. Currently, I can do all of this manually, but I'd like to be able to quickly flip between model versions without needing to manually track/download the saved_model.pb files.
Issuing a GET request (docs) against your version resource will return a Version object that has the deploymentUri field you are looking for. You can use any library for issuing HTTP request in your language of choice, although you do need to send an authorization token in the headers.
Here's an example in Python:
import requests
from oauth2client.client import GoogleCredentials

token = GoogleCredentials.get_application_default().get_access_token().access_token
api = 'https://ml.googleapis.com/v1/projects/MYPROJECT/models/MYMODEL/versions/MYVERSION'
headers = {'Authorization': 'Bearer ' + token}
response = requests.get(api, headers=headers)
print(response.json()['deploymentUri'])
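Once you have the deploymentUri (a gs:// path), you still need to split it into a bucket and an object prefix before downloading the saved_model.pb, e.g. with google-cloud-storage or gsutil. A small stdlib-only helper:

```python
from urllib.parse import urlparse

def split_gcs_uri(uri):
    """Split a gs://bucket/path/to/model URI into (bucket, prefix)."""
    parsed = urlparse(uri)
    if parsed.scheme != "gs":
        raise ValueError("not a GCS URI: %r" % uri)
    return parsed.netloc, parsed.path.lstrip("/")

print(split_gcs_uri("gs://my-bucket/models/v3"))  # ('my-bucket', 'models/v3')
```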
I have created an API Gateway from my existing API using the boto3 import command:
apiClient = boto3.client('apigateway', awsregion)
api_response = apiClient.import_rest_api(
    failOnWarnings=True,
    body=open('apifileswagger.json', 'rb').read()
)
But I can't modify the integration request. I tried the following boto3 command:
apiClient = boto3.client('apigateway', awsregion)
api_response = apiClient.put_integration(
    restApiId=apiName,
    resourceId='/api/v1/hub',
    httpMethod='GET',
    integrationHttpMethod='GET',
    type='AWS',
    uri='arn:aws:lambda:us-east-1:141697213513:function:test-lambda',
)
But I got an error like this:
Unexpected error: An error occurred () when calling the PutIntegration operation:
I need to change the Lambda function region and name using a boto3 command. Is that possible?
If so, what is the actual issue with this command?
In the put_integration() call listed above, your restApiId and resourceId look incorrect. Here's what you should do.
After importing your rest API, check to see if it is available by calling your apiClient's get_rest_apis(). If the API was imported correctly, you should see it listed in the response along with the API's ID (which is generated by AWS). Capture this ID for future operations.
Next, you'll need to look at all of the resources associated with this API by calling your apiClient's get_resources(). Capture the resource ID for the resource you wish to modify.
Using the API ID and resource ID, check to see if an integration config exists by calling your apiClient's get_integration(). If it does exist you can modify the integration request by calling update_integration(); if it does not exist, you need to create a new integration by calling put_integration() and passing the integration request as a parameter.
Here's an example of how that might look in code:
# Import API
api_response1 = apiClient.import_rest_api(failOnWarnings=True,
                                          body=open('apifileswagger.json', 'rb').read())
print(api_response1)

# Get API ID
api_response2 = apiClient.get_rest_apis()
for endpoint in api_response2['items']:
    if endpoint['name'] == "YOUR_API_NAME":
        api_ID = endpoint['id']

# Get Resource ID
api_response3 = apiClient.get_resources(restApiId=api_ID)
for resource in api_response3['items']:
    if resource['path'] == "YOUR_PATH":
        resource_ID = resource['id']

# Check for Existing Integrations
api_response4 = apiClient.get_integration(restApiId=api_ID, resourceId=resource_ID,
                                          httpMethod='GET')
print(api_response4)

# Create Integration with Request
integration_request = {'application/json': '{\r\n  "body": $input.json(\'$\')\r\n}'}
api_response5 = apiClient.put_integration(restApiId=api_ID, resourceId=resource_ID,
                                          httpMethod='GET', type='AWS',
                                          # Lambda functions are always invoked via POST
                                          integrationHttpMethod='POST',
                                          uri="YOUR_LAMBDA_URI",
                                          requestTemplates=integration_request)
print(api_response5)
All the methods listed above are explained in the Boto3 Documentation found here.
As with most API Gateway updates to API definitions, in order to update an integration request, you have to do a PATCH and pass a body with a patch document using the expected format. See documentation here
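For example, the patch document for swapping the Lambda URI of an existing integration could be built like this (a sketch; the path and ARN are illustrative):

```python
def uri_patch(new_uri):
    """Build the patchOperations list that update_integration() expects."""
    return [{"op": "replace", "path": "/uri", "value": new_uri}]

ops = uri_patch("arn:aws:apigateway:us-west-2:lambda:path/2015-03-31/"
                "functions/arn:aws:lambda:us-west-2:123456789012:"
                "function:test-lambda/invocations")
# Then: apiClient.update_integration(restApiId=api_ID, resourceId=resource_ID,
#                                    httpMethod='GET', patchOperations=ops)
```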