I have created an API Gateway from my existing API using the boto3 import command:
apiClient = boto3.client('apigateway', awsregion)
api_response = apiClient.import_rest_api(
    failOnWarnings=True,
    body=open('apifileswagger.json', 'rb').read()
)
But I can't modify the integration request. I tried the following boto3 call:
apiClient = boto3.client('apigateway', awsregion)
api_response = apiClient.put_integration(
    restApiId=apiName,
    resourceId='/api/v1/hub',
    httpMethod='GET',
    integrationHttpMethod='GET',
    type='AWS',
    uri='arn:aws:lambda:us-east-1:141697213513:function:test-lambda',
)
But I got an error like this:
Unexpected error: An error occurred () when calling the PutIntegration operation:
I need to change the Lambda function region and name using boto3. Is that possible? And if it is possible, what is the actual issue with this command?
In the put_integration() call listed above, your restApiId and resourceId look incorrect. Here's what you should do.
After importing your rest API, check to see if it is available by calling your apiClient's get_rest_apis(). If the API was imported correctly, you should see it listed in the response along with the API's ID (which is generated by AWS). Capture this ID for future operations.
Next, you'll need to look at all of the resources associated with this API by calling your apiClient's get_resources(). Capture the resource ID for the resource you wish to modify.
Using the API ID and resource ID, check to see if an integration config exists by calling your apiClient's get_integration(). If it does exist you can modify the integration request by calling update_integration(); if it does not exist, you need to create a new integration by calling put_integration() and passing the integration request as a parameter.
Here's an example of how that might look in code:
# Import API
api_response1 = apiClient.import_rest_api(
    failOnWarnings=True,
    body=open('apifileswagger.json', 'rb').read()
)
print(api_response1)

# Get API ID
api_response2 = apiClient.get_rest_apis()
for endpoint in api_response2['items']:
    if endpoint['name'] == "YOUR_API_NAME":
        api_ID = endpoint['id']

# Get Resource ID
api_response3 = apiClient.get_resources(restApiId=api_ID)
for resource in api_response3['items']:
    if resource['path'] == "YOUR_PATH":
        resource_ID = resource['id']

# Check for Existing Integrations
api_response4 = apiClient.get_integration(restApiId=api_ID, resourceId=resource_ID, httpMethod='GET')
print(api_response4)

# Create Integration with Request
integration_request = {'application/json': '{\r\n "body" : $input.json(\'$\'),\r\n}'}
api_response5 = apiClient.put_integration(
    restApiId=api_ID,
    resourceId=resource_ID,
    httpMethod='GET',
    type='AWS',
    integrationHttpMethod='GET',
    uri="YOUR_LAMBDA_URI",
    requestTemplates=integration_request
)
print(api_response5)
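One likely follow-up, offered as an assumption about this setup rather than as part of the original answer: with a non-proxy AWS integration you typically also need a method response and an integration response before the method can return data. A minimal sketch:

# Minimal 200 responses for the method and the integration.
apiClient.put_method_response(
    restApiId=api_ID, resourceId=resource_ID, httpMethod='GET',
    statusCode='200', responseModels={'application/json': 'Empty'})

apiClient.put_integration_response(
    restApiId=api_ID, resourceId=resource_ID, httpMethod='GET',
    statusCode='200', responseTemplates={'application/json': ''})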
All of the methods listed above are explained in the Boto3 documentation.
As with most API Gateway updates to API definitions, in order to update an integration request you have to do a PATCH and pass a body with a patch document in the expected format; see the documentation, and the sketch below.
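For illustration, a minimal sketch of such a PATCH-style update through boto3's update_integration(), assuming api_ID and resource_ID were captured as shown above; the region, account id, and function name in the ARN are placeholders:

# Repoint the integration at a different Lambda (placeholder ARN).
new_uri = ('arn:aws:apigateway:eu-west-1:lambda:path/2015-03-31/functions/'
           'arn:aws:lambda:eu-west-1:ACCOUNT_ID:function:NEW_FUNCTION/invocations')

api_response6 = apiClient.update_integration(
    restApiId=api_ID,
    resourceId=resource_ID,
    httpMethod='GET',
    patchOperations=[{'op': 'replace', 'path': '/uri', 'value': new_uri}]
)
print(api_response6)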
I'm trying to call a Google Cloud function from within Python using the following:
import requests

url = "MY_CLOUD_FUNCTON_URL"
data = {'name': 'example'}
response = requests.post(url, data=data)
but I get back the error: Your client does not have permission to get URL MY_CLOUD_FUNCTON from this server
Does anyone know how I can avoid this error? I am assuming I should be passing credentials as part of the request somehow?
Also note that if I instead try to call the function via gcloud from the command line as below, it works, but I want to do this from within Python:
gcloud functions call MY_CLOUD_FUNCTON --data '{"name": "example"}'
Any help would be really appreciated!
Given a working Cloud Function in HTTP mode that requires authentication in order to be triggered, you need to generate an authentication token and insert it in the header, as shown below:
import os
import json
import requests
import google.oauth2.id_token
import google.auth.transport.requests

os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = './my-service-account.json'

request = google.auth.transport.requests.Request()
audience = 'https://mylocation-myprojectname.cloudfunctions.net/MyFunctionName'
TOKEN = google.oauth2.id_token.fetch_id_token(request, audience)

r = requests.post(
    'https://mylocation-myprojectname.cloudfunctions.net/MyFunctionName',
    headers={'Authorization': f"Bearer {TOKEN}", "Content-Type": "application/json"},
    data=json.dumps({"key": "value"})  # possible request parameters
)
print(r.status_code, r.reason)
You have a few options here: either open the function to the public so that anyone can call it, or take the more secure route, albeit with a bit more setup. I will cover the second option, since it is the one I would suggest for security reasons; should you be satisfied with simply opening the function to the public (which is especially useful if you are trying to create a public endpoint after all), see this documentation.
If you want to limit who can invoke your GCF however, you would have to perform a few more steps.
1. Create a service account and give it the Cloud Functions Invoker role (if you simply want to restrict its permissions to invoking Cloud Functions).
2. After you assign the service account its role(s), the next page gives you the option to create a key.
3. After creating the service account key and downloading it as credentials.json, the last step is straightforward: populate the environment variable GOOGLE_APPLICATION_CREDENTIALS with the path to the credentials.json file.
Once these steps are done, you can invoke the GCF as you did before; only this time it will be invoked as the service account you created, which has all the permissions necessary to invoke a GCF.
This may be obvious to many, but to add to Marco's answer (I can't comment yet): make sure to install the google-auth package, not the google package. More details are in the documentation and the requirements.txt for the code on GitHub.
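For reference, a minimal requirements.txt for the token-fetching snippet above would contain just these two packages (versions omitted; pin them as your project requires):

google-auth
requests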
I have a situation where I am trying to create two Cloud Functions, CF1 and CF2, with one Cloud Scheduler. Both Cloud Functions have authenticated invocation enabled. My flow is: Cloud Scheduler triggers CF1, and on completion CF1 triggers CF2 via an HTTP call. I referred to 'Cannot invoke Google Cloud Function from GCP Scheduler' to access the authenticated CF1 from Cloud Scheduler, and I am able to access CF1. But I am having a problem when accessing CF2 from CF1: CF1 does not trigger CF2 and also gives no error message. Do we need to follow some other technique when accessing an authenticated Cloud Function from another authenticated Cloud Function?
CF1 code:
import json
import logging
from requests_futures.sessions import FuturesSession

def main(request):
    # Read parameter values from the request (URL arguments or JSON body).
    raw_request_data = request.data
    string_request_data = raw_request_data.decode("utf-8")
    request_json: dict = json.loads(string_request_data)
    request_args = request.args

    if request_json and 'cf2_endpoint' in request_json:
        cf2_endpoint = request_json['cf2_endpoint']
    elif request_args and 'cf2_endpoint' in request_args:
        cf2_endpoint = request_args['cf2_endpoint']
    else:
        cf2_endpoint = 'Invalid endpoint for CF2'

    logger = logging.getLogger('test')
    try:
        session = FuturesSession()
        session.get("{}".format(cf2_endpoint))
        logger.info("First cloud function executed successfully.")
    except RuntimeError:
        logger.error("Exception occurred {}".format(RuntimeError))
CF2 code:
import logging

def main(request):
    logger = logging.getLogger('test')
    logger.info("second cloud function executed successfully.")
Current output logs:
First cloud function executed successfully.
Expected output logs:
First cloud function executed successfully.
second cloud function executed successfully.
Note: the same flow works if I use unauthenticated access to both Cloud Functions.
Two things are happening here:
You're not using requests-futures entirely correctly. Since the request is made asynchronously, you need to block on the result before the function implicitly returns, otherwise the function might return before your HTTP request completes (and in this example it probably is returning early):
session = FuturesSession()
future = session.get("{}".format(cf2_endpoint))
resp = future.result() # Block on the request completing
The request you're making to the second function is not actually an authenticated request. Outbound requests from a Cloud Function are not authenticated by default. If you looked at what the actual response is above, you would see:
>>> resp.status_code
403
>>> resp.content
b'\n<html><head>\n<meta http-equiv="content-type" content="text/html;charset=utf-8">\n<title>403 Forbidden</title>\n</head>\n<body text=#000000 bgcolor=#ffffff>\n<h1>Error: Forbidden</h1>\n<h2>Your client does not have permission to get URL <code>/function_two</code> from this server.</h2>\n<h2></h2>\n</body></html>\n'
You could jump through a lot of hoops to properly authenticate this request, as detailed in the docs: https://cloud.google.com/functions/docs/securing/authenticating#function-to-function
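As a rough sketch of what those docs describe (the usual pattern, stated here as an assumption rather than tested code): inside CF1 you would fetch an identity token for CF2's URL from the metadata server, which is only reachable when running on GCP, and pass it as a bearer token:

import requests

# Fetch an ID token whose audience is the CF2 URL (cf2_endpoint).
metadata_url = ('http://metadata.google.internal/computeMetadata/v1/'
                'instance/service-accounts/default/identity?audience=')
token = requests.get(metadata_url + cf2_endpoint,
                     headers={'Metadata-Flavor': 'Google'}).text

# Make the authenticated call to CF2.
resp = requests.get(cf2_endpoint, headers={'Authorization': f'Bearer {token}'})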
However, a better alternative would be to make your second function a "background" function and invoke it via a PubSub message published from the first function instead:
from google.cloud import pubsub

publisher = pubsub.PublisherClient()
topic_name = 'projects/{project_id}/topics/{topic}'.format(
    project_id=<your project id>,
    topic='MY_TOPIC_NAME',  # Set this to something appropriate.
)

def function_one(request):
    message = b'My first message!'
    publisher.publish(topic_name, message)

def function_two(event, context):
    message = event['data'].decode('utf-8')
    print(message)
As long as your functions have the permissions to publish PubSub messages, this avoids the need to add authorization to the HTTP requests, and also ensures at-least-once delivery.
Google Cloud Functions provides a REST API interface that includes a call method, which can be used for HTTP invocation from another Cloud Function.
Although the documentation mentions using Google-provided client libraries, there is not yet a dedicated one for Cloud Functions in Python.
Instead you need to use the general Google API Client Library. [This is the Python one.]
The main difficulty with this approach is probably understanding the authentication process.
Generally you need to provide two things to build a client service: credentials and scopes.
The simplest way to get credentials is to rely on the Application Default Credentials (ADC) library. The right documentation about that is:
https://cloud.google.com/docs/authentication/production
https://github.com/googleapis/google-api-python-client/blob/master/docs/auth.md
The place to get scopes is each REST API function's documentation page.
For example, the OAuth scope: https://www.googleapis.com/auth/cloud-platform
A complete code example of calling a 'hello-world' Cloud Function is below.
Before you run it:
1. Create a default Cloud Function on GCP in your project.
2. Note the default service account it will use.
3. Keep the default body.
4. Note the project_id, the function name, and the location where you deploy the function.
5. If you will call the function from outside the Cloud Functions environment (locally, for instance), set the environment variable GOOGLE_APPLICATION_CREDENTIALS according to the docs mentioned above.
6. If you will actually call it from another Cloud Function, you don't need to configure credentials at all.
from googleapiclient.discovery import build
from googleapiclient.discovery_cache.base import Cache
import google.auth
import pprint as pp

def get_cloud_function_api_service():
    class MemoryCache(Cache):
        _CACHE = {}

        def get(self, url):
            return MemoryCache._CACHE.get(url)

        def set(self, url, content):
            MemoryCache._CACHE[url] = content

    scopes = ['https://www.googleapis.com/auth/cloud-platform']

    # If the environment variable GOOGLE_APPLICATION_CREDENTIALS is set,
    # ADC uses the service account file that the variable points to.
    #
    # If the environment variable GOOGLE_APPLICATION_CREDENTIALS isn't set,
    # ADC uses the default service account that Compute Engine, Google
    # Kubernetes Engine, App Engine, Cloud Run, and Cloud Functions provide.
    #
    # See more on https://cloud.google.com/docs/authentication/production
    credentials, project_id = google.auth.default(scopes=scopes)

    service = build('cloudfunctions', 'v1', credentials=credentials, cache=MemoryCache())
    return service

google_api_service = get_cloud_function_api_service()

name = 'projects/{project_id}/locations/us-central1/functions/function-1'
body = {
    'data': '{ "message": "It is awesome, you are develop on Stack Overflow language!"}'  # JSON passed as a string
}

result_call = google_api_service.projects().locations().functions().call(name=name, body=body).execute()
pp.pprint(result_call)

# Expected output:
# {'executionId': '3h4c8cb1kwe2', 'result': 'It is awesome, you are develop on Stack Overflow language!'}
I need to create a new user in Azure DevOps using the Python client library for the Azure DevOps REST API.
I wrote the following code:
from azure.devops.connection import Connection
from azure.devops.v5_0.graph.models import GraphUserCreationContext
from msrest.authentication import BasicAuthentication
credentials = BasicAuthentication('', personal_access_token)
connection = Connection(base_url=organization_url, creds=credentials)
graph_client = connection.clients_v5_0.get_graph_client()
addAADUserContext = GraphUserCreationContext("anaya.john#mydomain.com")
print(addAADUserContext)
resp = graph_client.create_user(addAADUserContext)
print(resp)
I get the output:
{'additional_properties': {}, 'storage_key': 'anaya.john#dynactionize.onmicrosoft.com'}
And an error occurs while calling the create_user method:
azure.devops.exceptions.AzureDevOpsServiceError: VS860015: Must have exactly one of originId or principalName set.
Actually, what I should pass is a GraphUserPrincipalNameCreationContext to the create_user function.
I found a .NET sample which does this in a function named AddRemoveAADUserByUPN() :
https://github.com/microsoft/azure-devops-dotnet-samples/blob/master/ClientLibrary/Samples/Graph/UsersSample.cs
GraphUserPrincipalNameCreationContext is an interface in this sample, but Python doesn't have interfaces.
So how can I implement this in Python?
Some of the classes, like GraphUserPrincipalNameCreationContext, aren't currently available in the Python client API; they are working on it. You can track the issue here in the GitHub repo:
https://github.com/microsoft/azure-devops-python-api/issues/176
You can use the User Entitlements - Add REST API for Azure DevOps instead of its Graph API. You can use the following Python client for this purpose:
https://github.com/microsoft/azure-devops-python-api/tree/dev/azure-devops/azure/devops/v5_0/member_entitlement_management
You can refer to the sample given in the following question to see how to use the mentioned Python client (a rough sketch is also given below):
Unable to deserialize to object: type, KeyError: ' key: int; value: str '
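For illustration, here is a rough, untested sketch of that approach using the member entitlement client. The method name add_user_entitlement and the model constructors below are assumptions based on the v5_0 package layout and the linked sample, so verify them against the repository:

from azure.devops.connection import Connection
from azure.devops.v5_0.member_entitlement_management.models import (
    AccessLevel, GraphUser, UserEntitlement)
from msrest.authentication import BasicAuthentication

credentials = BasicAuthentication('', personal_access_token)
connection = Connection(base_url=organization_url, creds=credentials)

# Assumed accessor name, mirroring get_graph_client() in the question.
mem_client = connection.clients_v5_0.get_member_entitlement_management_client()

entitlement = UserEntitlement(
    # 'origin' and 'subject_kind' follow the AAD user case in the .NET sample.
    user=GraphUser(principal_name='anaya.john#mydomain.com',
                   origin='aad', subject_kind='user'),
    access_level=AccessLevel(account_license_type='express'))

resp = mem_client.add_user_entitlement(entitlement)  # assumed method name
print(resp)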
I am a beginner with Amazon Web Services.
I have the Lambda Python function below:
import sys
import logging
import pymysql
import json

rds_host = ".amazonaws.com"
name = "name"
password = "123"
db_name = "db"
port = 3306

def save_events(event):
    result = []
    conn = pymysql.connect(rds_host, user=name, passwd=password,
                           db=db_name, connect_timeout=30)
    # The with block closes the cursor on exit.
    with conn.cursor(pymysql.cursors.DictCursor) as cur:
        cur.execute("select * from bodyPart")
        result = cur.fetchall()
    print("Data from RDS...")
    print(result)
    bodyparts = json.dumps(result)
    bodyParts = bodyparts.replace("\"", "'")
    return bodyParts

def lambda_handler(event, context):
    bodyParts = save_events(event)
    return bodyParts
Using the above function I am sending JSON to the client through API Gateway. Now suppose the user selects an item from the list and sends it back as JSON: where will I get the HTTP request, and how should I process that request?
I just want to add some information to @Harsh Manvar's answer.
The easiest way, I think, is to use Lambda proxy integration (api-gateway-proxy-integration-lambda).
API Gateway currently supports AWS Lambda very well; with proxy integration you can read the request body (JSON) in your Lambda function from event['body'].
I use it every day in a hobby project (a Slack command bot, which is harder because you need to map from application/x-www-form-urlencoded to JSON through a mapping template).
For you I think it is simpler, because you are using only JSON for the request and response. The key is to select the integration type "Lambda Function".
You can take some quick tutorials on Medium.com for more detail; I am only linking the docs from Amazon here.
@mohith: Hi man, I just made a simple approach for you; you can see it here.
First you need to create an API (see the docs above), then link it to your Lambda function. Because you only use JSON, you need to check the box named "Use Lambda Proxy integration".
Then you need to deploy it!
Then in your function you can handle the request. In my case I return the whole event that is passed to my function, as sketched below:
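(The original screenshots aren't available here, so as a stand-in, this is a minimal sketch of a proxy-integration handler that echoes the parsed JSON body back to the caller; the statusCode/headers/body response shape is what Lambda proxy integration expects, and the 'received' field is illustrative.)

import json

def lambda_handler(event, context):
    # With proxy integration, the raw request body arrives as a string.
    body = json.loads(event.get('body') or '{}')

    # Proxy integration requires this response shape.
    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'application/json'},
        'body': json.dumps({'received': body})
    }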
Finally you can post to your endpoint; I used Postman in my case.
I hope you get my idea: once you have successfully deployed your API, you can do anything with it from your front end.
I also suggest you research CloudWatch. When you work with API Gateway, Lambda, and friends, it is a Swiss army knife you cannot live without; it makes tracing and debugging your code much easier.
Please do not hesitate to ask me anything.
You can use the AWS service called API Gateway; it will give you an endpoint for HTTP API requests.
API Gateway makes the connection to your Lambda, and you can pass the HTTP request through to the Lambda.
Here is info about creating a REST API on Lambda; you can check it out: https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-create-api.html
AWS also provides GET and POST Lambda examples. You just have to edit the code, and it will automatically create the API Gateway; you can check it as a reference.
From the Lambda console: create function > choose AWS Serverless Application Repository > type "get" in the search bar and search > api-lambda-dynamodb > it will take a value from the user and process it in Lambda.
Here is a link where you can check the examples directly: https://console.aws.amazon.com/lambda/home?region=us-east-1#/create?tab=serverlessApps
I'm trying to use boto3 to query my CloudSearch domain using the docs as a guide: http://boto3.readthedocs.io/en/latest/reference/services/cloudsearchdomain.html#client
import boto3
import json
boto3.setup_default_session(profile_name='myprofile')
cloudsearch = boto3.client('cloudsearchdomain')
response = cloudsearch.search(
query="(and name:'foobar')",
queryParser='structured',
returnFields='address',
size=10
)
print( json.dumps(response) )
...but it fails with:
botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "https://cloudsearchdomain.eu-west-1.amazonaws.com/2013-01-01/search"
But how am I supposed to set or configure the endpoint or domain that I want to connect to? I tried adding an endpoint parameter to the request, thinking maybe it was an accidental omission from the docs, but I got this error response:
Unknown parameter in input: "endpoint", must be one of: cursor, expr, facet, filterQuery, highlight, partial, query, queryOptions, queryParser, return, size, sort, start, stats
The docs say:
The endpoint for submitting Search requests is domain-specific. You submit search requests to a domain's search endpoint. To get the search endpoint for your domain, use the Amazon CloudSearch configuration service DescribeDomains action. A domain's endpoints are also displayed on the domain dashboard in the Amazon CloudSearch console.
I know what my search endpoint is, but how do I supply it?
I found a post on a Google forum with the answer: you have to add the endpoint_url parameter to the client constructor, e.g.
client = boto3.client('cloudsearchdomain', endpoint_url='http://...')
I hope those docs get updated, because I wasted a lot of time before I figured that out.
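If you'd rather not hard-code the endpoint, here is a small sketch of looking it up programmatically through the configuration service's DescribeDomains action; the domain name and region below are placeholders:

import boto3

# Look up the domain's search endpoint via the CloudSearch configuration
# service, then build the domain client from it.
config_client = boto3.client('cloudsearch', region_name='eu-west-1')
status = config_client.describe_domains(DomainNames=['mydomain'])
endpoint = status['DomainStatusList'][0]['SearchService']['Endpoint']

# The returned endpoint has no scheme, so prepend one.
client = boto3.client('cloudsearchdomain', endpoint_url='https://' + endpoint)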
import boto3

client = boto3.client('cloudsearchdomain',
                      aws_access_key_id='access-key',
                      aws_secret_access_key='some-secret-key',
                      region_name='us-east-1',  # your chosen region
                      # endpoint_url is your Search Endpoint as defined in the AWS console
                      endpoint_url='cloudsearch-url')

response = client.search(
    query='Foo',  # your search string
    size=10
)
Reference response['hits'] for returned results.
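For example, assuming the standard response shape of the search operation, iterating over the results might look like this:

# Each hit carries the document id and any fields requested via returnFields.
for hit in response['hits']['hit']:
    print(hit['id'], hit.get('fields'))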