I'm completely new to AWS Lex and AWS Lambda, so this is a beginner question.
I'm trying to create a bot that helps a user choose bikes. The first question is what type of bike the user needs, and the answers are men/women/unisex/kids.
I'm trying to write a Lambda validation function so that the bot will tell the user when the choice they entered is none of those mentioned above.
The error I receive is:
An error has occurred: Invalid Lambda Response: Received error
response from Lambda (LambdaRequestId:
471e1df7-034c-46e7-9154-23de79cb16cf; Error: Unhandled)
And this is the code:
import json

def get_slots(intent_request):
    return intent_request['currentIntent']['slots']

def valid_bike_type(intent_request):
    bike_type = get_slots(intent_request)['BikeType']
    bike_types_lst = ['men', 'women', 'unisex', 'kids']
    if bike_type is not None and bike_type.lower() not in bike_types_lst:
        return build_validation_result(False, 'BikeType', 'We only have bikes for Men/Women/Unisex/Kids.')
    return build_validation_result(True, None, None)

def dispatch(intent_request):
    intent_name = intent_request['currentIntent']['name']
    if intent_name == 'WelcomeIntent':
        return valid_bike_type(intent_request)
    raise Exception('Intent with name ' + intent_name + ' not supported.')

def lambda_handler(event, context):
    return dispatch(event)
Thank you very much for any help!
You can just use print as you would usually do on your workstation, e.g.:
def lambda_handler(event, context):
    print(event)
    return dispatch(event)
The output of print, or any other function that writes to stdout, will end up in CloudWatch Logs, as explained in the AWS docs:
Lambda automatically integrates with CloudWatch Logs and pushes all logs from your code to a CloudWatch Logs group associated with a Lambda function, which is named /aws/lambda/<function name>.
For Lex, you can also check the documentation about Lambda's input and output format, or check the available blueprints.
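For reference, a Lex (V1) code hook is expected to return a response shaped around a dialogAction object; a bare validation result like the one in your code is not a valid Lex response. A minimal sketch of two common response shapes (field names as in the Lex V1 docs; the wiring into your valid_bike_type is left to you):

# Minimal sketch of Lex V1 dialog code-hook responses.
# A validation hook typically wraps its result like this instead of
# returning the bare output of build_validation_result.
def elicit_slot(intent_name, slots, slot_to_elicit, message):
    return {
        'dialogAction': {
            'type': 'ElicitSlot',
            'intentName': intent_name,
            'slots': slots,
            'slotToElicit': slot_to_elicit,
            'message': {'contentType': 'PlainText', 'content': message}
        }
    }

def delegate(slots):
    # Lets Lex continue with its configured flow when validation passes.
    return {'dialogAction': {'type': 'Delegate', 'slots': slots}}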
I found some information in the AWS docs about unhandled errors.
I would first check the Lambda settings: Memory and Timeout (the default timeout is only 3 seconds).
You can edit these settings in the AWS console, in the Lambda configuration, under the 'Basic Settings' tab.
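If you prefer to script that change, the same settings can be adjusted with boto3 (a sketch; the function name is a placeholder):

import boto3

# Raise the timeout and memory of a Lambda function programmatically.
# 'my-function' is a placeholder; use your function's name.
lambda_client = boto3.client('lambda')
lambda_client.update_function_configuration(
    FunctionName='my-function',
    Timeout=30,      # seconds; the default is only 3
    MemorySize=256   # MB
)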
Related
I am developing an API for an app in Python, using FastAPI, Serverless and Amazon Web Services.
We are using CloudWatch to store our logs.
The thing is, I am required to send different logs to different groups in the same application, depending on whether it is an error, an info message, etc.
Let's say I have two log groups in CloudWatch: /aws/lambda/firstGroup and /aws/lambda/secondGroup.
And I have this function:
def foo(some_data):
    logger.info(f'calling the function with the data: {some_data}')  # this goes to logGroup /aws/lambda/firstGroup
    try:
        doSomething()
    except:
        logger.error('ERROR! Something happened')  # this goes to logGroup /aws/lambda/secondGroup
How can I configure the serverless.yml file so the logger.info goes to the first group and the logger.error goes to the second group?
Thanks in advance!
I am using this solution with an EC2 instance.
create a log group
create log stream
then dump your logs
import time

import boto3

# Placeholders -- set these to your own values.
REGION = 'us-east-1'
LOG_GROUP = 'my-log-group'

# init client
clw_client = boto3.client('logs', region_name=REGION)

# create the log group if it does not exist yet
try:
    clw_client.create_log_group(logGroupName=LOG_GROUP)
except clw_client.exceptions.ResourceAlreadyExistsException:
    pass

# create a new log stream named after the current timestamp
LOG_STREAM = '{}-{}'.format(time.strftime("%m-%d-%Y-%H-%M-%S"), 'logstream')
try:
    clw_client.create_log_stream(logGroupName=LOG_GROUP, logStreamName=LOG_STREAM)
except clw_client.exceptions.ResourceAlreadyExistsException:
    pass

def log_update(text):
    print(text)
    response = clw_client.describe_log_streams(
        logGroupName=LOG_GROUP,
        logStreamNamePrefix=LOG_STREAM
    )
    try:
        event_log = {
            'logGroupName': LOG_GROUP,
            'logStreamName': LOG_STREAM,
            'logEvents': [{
                'timestamp': int(round(time.time() * 1000)),
                'message': f"{time.strftime('%Y-%m-%d %H:%M:%S')}\t {text}"
            }],
        }
        # put_log_events needs the sequence token from the previous call
        if 'uploadSequenceToken' in response['logStreams'][0]:
            event_log.update({'sequenceToken': response['logStreams'][0]['uploadSequenceToken']})
        response = clw_client.put_log_events(**event_log)
    except Exception as e:
        log_update(e)
Then call that function inside your app whenever you like. Just don't check for the group and stream again and again within one job; those calls should run once.
You can extend it with more logic, such as changing the log-group name with some if/else statements, to implement what you wanted in this question (a sketch follows below). Good luck!
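For instance, a sketch of the two-group case from that question: the level parameter and the LEVEL_GROUPS mapping are mine, the group names come from the question, and clw_client and LOG_STREAM are reused from the block above (the stream is assumed to have been created in both groups):

import time

# Group names from the question above.
LEVEL_GROUPS = {
    'info': '/aws/lambda/firstGroup',
    'error': '/aws/lambda/secondGroup',
}

def log_routed(text, level='info'):
    """Send a log line to the log group matching its level."""
    log_group = LEVEL_GROUPS.get(level, LEVEL_GROUPS['info'])
    response = clw_client.describe_log_streams(
        logGroupName=log_group,
        logStreamNamePrefix=LOG_STREAM
    )
    event_log = {
        'logGroupName': log_group,
        'logStreamName': LOG_STREAM,
        'logEvents': [{
            'timestamp': int(round(time.time() * 1000)),
            'message': f"{time.strftime('%Y-%m-%d %H:%M:%S')}\t {text}"
        }],
    }
    if 'uploadSequenceToken' in response['logStreams'][0]:
        event_log['sequenceToken'] = response['logStreams'][0]['uploadSequenceToken']
    clw_client.put_log_events(**event_log)

# Usage:
# log_routed('all good', level='info')     # -> /aws/lambda/firstGroup
# log_routed('ERROR! ...', level='error')  # -> /aws/lambda/secondGroup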
I have a Lambda function that reads an Excel file, does some processing, and then stores the result in a different S3 bucket.
def lambda_handler(event, context):
    try:
        status = int(event['status'])
        if status:
            Reading_Bucket_Name = 'read-data-s3'
            Writing_Bucket_Name = 'write-excel-file-bucket'
            rowDataFile = 'Analyse.xlsx'
            HTMLfileName = 'index.html'
            url = loading_function(Reading_Bucket_Name=Reading_Bucket_Name, Writing_Bucket_Name=Writing_Bucket_Name, rowDataFile=rowDataFile, HTMLfileName=HTMLfileName)
            status = 0
            return {"statusCode": 200, "URL": url}
        else:
            return {"statusCode": 400, "Error": "The code could not be executed"}
    except Exception as e:
        print('#________ An error occurred while reading Status code int(event[status]) ________#')
        print(e)
        raise e
    return None
The code is only supposed to run once: it returns the URL, then the Lambda function should finish and exit.
But the problem is: I get the first output, and then the Lambda function calls itself again! It then goes into the exception handler and executes it many times, because there is no event['status']. That key is only present when I invoke the function with:
{
    "status": "1"
}
How can I stop execution after getting the first output?
Update:
This is the part that causes the problem, by uploading a new file to an S3 bucket:
s3_client = boto3.client('s3')
fig.write_html('/tmp/' + HTMLfileName, auto_play=False)
response = s3_client.upload_file('/tmp/' + HTMLfileName, Writing_Bucket_Name, HTMLfileName, ExtraArgs={'ACL':'public-read', 'ContentType':'text/html'})
return True
Given that the Lambda function appears to run when a new object is created in the Amazon S3 bucket, it would appear that the bucket has been configured with an Event Notification that triggers the AWS Lambda function.
To check this, go to the bucket in the S3 management console, open the Properties tab and scroll down to Event notifications. Look for any configured events that trigger a Lambda function.
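The same check can be done programmatically with boto3 (a sketch; the bucket name is the writing bucket from your question):

import boto3

s3_client = boto3.client('s3')
# Returns any Lambda/SNS/SQS notification configurations on the bucket.
config = s3_client.get_bucket_notification_configuration(
    Bucket='write-excel-file-bucket'
)
print(config.get('LambdaFunctionConfigurations', []))

If the writing bucket triggers the same function that writes to it, every upload re-invokes the function, which would explain the loop you are seeing.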
I am very new to Azure Function Apps and OAuth so please bear with me.
My Setup
I have an Azure Function App with a simple python-function doing nothing else but printing out the request headers:
import logging
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    name = req.params.get('name')
    if not name:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            name = req_body.get('name')
    if name:
        aadIdToken = req.headers.get('X-MS-TOKEN-AAD-ID-TOKEN')
        aadAccessToken = req.headers.get('X-MS-TOKEN-AAD-ACCESS-TOKEN')
        principalID = req.headers.get('X-MS-CLIENT-PRINCIPAL-ID')
        principalName = req.headers.get('X-MS-CLIENT-PRINCIPAL-NAME')
        idProviderId = req.headers.get('X-MS-CLIENT-PRINCIPAL-IDP')
        aadRefreshToken = req.headers.get('X-MS-TOKEN-AAD-REFRESH-TOKEN')
        clientPrincipal = req.headers.get('X-MS-CLIENT-PRINCIPAL')
        result = "\n"
        myDict = sorted(dict(req.headers))
        for key in myDict:
            result += f"{key} = {dict(req.headers)[key]}\n"
        return func.HttpResponse(
            f"Hello, {name}. How are you ? Doing well ?"
            f"\n\nHere is some data concerning your Client principal:"
            f"\nThis is your X-MS-CLIENT-PRINCIPAL-ID: {principalID}"
            f"\nThis is your X-MS-CLIENT-PRINCIPAL-NAME: {principalName}"
            f"\nThis is your X-MS-CLIENT-PRINCIPAL-IDP: {idProviderId}"
            f"\nThis is your X-MS-CLIENT-PRINCIPAL: {clientPrincipal}"
            f"\n\nHere is some data concerning your AAD-token:"
            f"\nThis is your X-MS-TOKEN-AAD-ID-TOKEN: {aadIdToken}"
            f"\nThis is your X-MS-TOKEN-AAD-ACCESS-TOKEN: {aadAccessToken}"
            f"\nThis is your X-MS-TOKEN-AAD-REFRESH-TOKEN: {aadRefreshToken}"
            f"\n\n\nresult: {result}"
        )
    else:
        return func.HttpResponse(
            "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.",
            status_code=200
        )
I followed this guide to let the user authenticate via EasyAuth before calling the function.
This seems to work fine. When accessing the function via browser, I am redirected to sign in. After a successful sign-in I am redirected again and the HTTP response is printed out in the browser. As I am able to access X-MS-CLIENT-PRINCIPAL-ID and X-MS-CLIENT-PRINCIPAL-NAME, I suppose the authentication was successful. However, when printing out the whole request header, I did not find an X-MS-TOKEN-AAD-REFRESH-TOKEN, X-MS-TOKEN-AAD-ACCESS-TOKEN or X-MS-TOKEN-AAD-ID-TOKEN.
This is the output (it is too large to show in full; below the part shown in the screenshot I can see the header content):
First half of my output
My question
What I am trying to do now is to access the groups assigned to the logged-in user via the python code of the function to further authorize his request (e.g. "user can only execute the function when group xyz is assigned, else he will be prompted 'not allowed'").
To achieve this I added the "groups"-claim to the Token Configuration of my App Registration.
From what I understand, accessing the user groups via a function coded in .NET is easily possible by using the ClaimsPrincipal object (source).
How would I be able to access the user assigned groups via python code?
Is that possible?
Am I understanding something completely wrong?
Followup:
One thing that I do not understand by now is that I can see an id_token in the callback HTTP request of the browser debugger when accessing the function via browser for the first time (to trigger sign-in):
Browser debugger: id_token in callback-request
When I decoded that token using jwt.io, I was able to see some IDs of assigned user groups, which seems to be exactly what I want to access via the Python code.
Re-loading the page (I suppose the request then uses the already-authenticated browser session) makes the callback disappear.
The header X-MS-CLIENT-PRINCIPAL contains the same claims as the id_token. So if we want to get the group claim, we can base64 decode the header.
For example
My code
import logging
import azure.functions as func
import base64

def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    name = req.params.get('name')
    if not name:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            name = req_body.get('name')
    if name:
        aadIdToken = req.headers.get('X-MS-TOKEN-AAD-ID-TOKEN')
        aadAccessToken = req.headers.get('X-MS-TOKEN-AAD-ACCESS-TOKEN')
        principalID = req.headers.get('X-MS-CLIENT-PRINCIPAL-ID')
        principalName = req.headers.get('X-MS-CLIENT-PRINCIPAL-NAME')
        idProviderId = req.headers.get('X-MS-CLIENT-PRINCIPAL-IDP')
        aadRefreshToken = req.headers.get('X-MS-TOKEN-AAD-REFRESH-TOKEN')
        clientPrincipal = req.headers.get('X-MS-CLIENT-PRINCIPAL')
        # base64-decode the client principal to expose its claims
        clientPrincipal = base64.b64decode(clientPrincipal)
        result = "\n"
        myDict = sorted(dict(req.headers))
        for key in myDict:
            result += f"{key} = {dict(req.headers)[key]}\n"
        return func.HttpResponse(
            f"Hello, {name}. How are you ? Doing well ?"
            f"\n\nHere is some data concerning your Client principal:"
            f"\nThis is your X-MS-CLIENT-PRINCIPAL-ID: {principalID}"
            f"\nThis is your X-MS-CLIENT-PRINCIPAL-NAME: {principalName}"
            f"\nThis is your X-MS-CLIENT-PRINCIPAL-IDP: {idProviderId}"
            f"\nThis is your X-MS-CLIENT-PRINCIPAL: {clientPrincipal}"
            f"\n\nHere is some data concerning your AAD-token:"
            f"\nThis is your X-MS-TOKEN-AAD-ID-TOKEN: {aadIdToken}"
            f"\nThis is your X-MS-TOKEN-AAD-ACCESS-TOKEN: {aadAccessToken}"
            f"\nThis is your X-MS-TOKEN-AAD-REFRESH-TOKEN: {aadRefreshToken}"
            f"\n\n\nresult: {result}"
        )
    else:
        return func.HttpResponse(
            "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.",
            status_code=200
        )
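The decoded header is a JSON document with a claims array, so the group IDs can be pulled out of it like this (a sketch; it assumes the groups claim was added to the token configuration as described in the question):

import base64
import json

def get_group_ids(client_principal_header):
    """Extract the values of all 'groups' claims from X-MS-CLIENT-PRINCIPAL."""
    decoded = json.loads(base64.b64decode(client_principal_header))
    return [claim['val'] for claim in decoded.get('claims', [])
            if claim['typ'] == 'groups']

# Hypothetical usage inside main():
# groups = get_group_ids(req.headers.get('X-MS-CLIENT-PRINCIPAL'))
# if 'expected-group-object-id' not in groups:
#     return func.HttpResponse('not allowed', status_code=403)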
I'm sending Apple push notifications via AWS SNS via Lambda with Boto3 and Python.
from __future__ import print_function
import boto3

def lambda_handler(event, context):
    client = boto3.client('sns')
    for record in event['Records']:
        if record['eventName'] == 'INSERT':
            rec = record['dynamodb']['NewImage']
            competitors = rec['competitors']['L']
            for competitor in competitors:
                if competitor['M']['confirmed']['BOOL'] == False:
                    endpoints = competitor['M']['endpoints']['L']
                    for endpoint in endpoints:
                        print(endpoint['S'])
                        response = client.publish(
                            #TopicArn='string',
                            TargetArn=endpoint['S'],
                            Message='test message'
                            #Subject='string',
                            #MessageStructure='string',
                        )
Everything works fine! But when an endpoint is invalid for some reason (at the moment this happens every time I run a development build on my device, since I get a different endpoint then; it will be either not found or deactivated), the Lambda function fails and gets called all over again. In this particular case, if for example the second endpoint fails, it will send the push over and over again to endpoint 1, to infinity.
Is it possible to ignore invalid endpoints and just keep going with the function?
Thank you
Edit:
Thanks to your help I was able to solve it with:
try:
    response = client.publish(
        #TopicArn='string',
        TargetArn=endpoint['S'],
        Message='test message'
        #Subject='string',
        #MessageStructure='string',
    )
except Exception as e:
    print(e)
    continue
On failure, AWS Lambda retries the function until the event expires from the stream.
In your case, since the exception on the second endpoint is not handled, the retry mechanism causes the re-execution of the publish to the first endpoint.
If you handle the exception and ensure the function ends successfully even when there is a failure, the retries will not happen.
Is it possible to check if a particular AWS IAM key has permissions for a set of specific commands?
Essentially, is there an API for AWS's policy simulator?
So far I've been using hacks, such as executing a command with incorrect parameters that utilizes the permission in question, and watching what response I get back.
Example:
# needed resource: 'elasticloadbalancer:SetLoadBalancerListenerSSLCertificate'
# Check:
try:
    elb.set_listener_SSL_certificate(443, 'fake')
except BotoServerError as e:
    if e.error_code == 'AccessDenied':
        print("You don't have access to "
              "elasticloadbalancer:SetLoadBalancerListenerSSLCertificate")
This is obviously hacky. Ideally I'd have some function call like iam.check_against(resource) or something. Any suggestions?
See boto3's simulate_principal_policy.
I've made this function to test for permissions (you'll need to modify it slightly, as it's not completely self-contained):
from typing import Dict, List, Optional

def blocked(
    actions: List[str],
    resources: Optional[List[str]] = None,
    context: Optional[Dict[str, List]] = None
) -> List[str]:
    """test whether IAM user is able to use specified AWS action(s)

    Args:
        actions (list): AWS action(s) to validate IAM user can use.
        resources (list): Check if action(s) can be used on resource(s).
            If None, action(s) must be usable on all resources ("*").
        context (dict): Check if action(s) can be used with context(s).
            If None, it is expected that no context restrictions were set.

    Returns:
        list: Actions denied by IAM due to insufficient permissions.
    """
    if not actions:
        return []
    actions = list(set(actions))

    if resources is None:
        resources = ["*"]

    _context: List[Dict] = [{}]
    if context is not None:
        # Convert context dict to list[dict] expected by ContextEntries.
        _context = [{
            'ContextKeyName': context_key,
            'ContextKeyValues': [str(val) for val in context_values],
            'ContextKeyType': "string"
        } for context_key, context_values in context.items()]

    # You'll need to create an IAM client here
    results = aws.iam_client().simulate_principal_policy(
        PolicySourceArn=consts.IAM_ARN,  # Your IAM user's ARN goes here
        ActionNames=actions,
        ResourceArns=resources,
        ContextEntries=_context
    )['EvaluationResults']

    return sorted([result['EvalActionName'] for result in results
                   if result['EvalDecision'] != "allowed"])
You need to pass the permission's original action names to actions, like so:
blocked_actions = verify_perms.blocked(actions=[
    "iam:ListUsers",
    "iam:ListAccessKeys",
    "iam:DeleteAccessKey",
    "iam:ListGroupsForUser",
    "iam:RemoveUserFromGroup",
    "iam:DeleteUser"
])
Here's an example that uses the resources and context arguments as well:
def validate_type_and_size_allowed(instance_type, volume_size):
    """validate user is allowed to create instance with type and size"""
    if validate_perms.blocked(actions=["ec2:RunInstances"],
                              resources=["arn:aws:ec2:*:*:instance/*"],
                              context={'ec2:InstanceType': [instance_type]}):
        halt.err(f"Instance type {instance_type} not permitted.")
    if validate_perms.blocked(actions=["ec2:RunInstances"],
                              resources=["arn:aws:ec2:*:*:volume/*"],
                              context={'ec2:VolumeSize': [volume_size]}):
        halt.err(f"Volume size {volume_size}GiB is too large.")
The IAM Policy Simulator provides an excellent UI for determining which users have access to particular API calls.
If you wish to test this programmatically, use the DryRun parameter when making an API call. The function will not actually execute, but you will be informed whether you have sufficient permissions to execute it. It will not, however, check whether the call itself would have succeeded (e.g. having an incorrect certificate name).
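For example, a DryRun check with boto3 looks like this (a sketch using EC2's RunInstances; the AMI ID is a placeholder):

import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client('ec2')
try:
    # DryRun=True verifies permissions without launching anything.
    ec2.run_instances(ImageId='ami-12345678', InstanceType='t2.micro',
                      MinCount=1, MaxCount=1, DryRun=True)
except ClientError as e:
    if e.response['Error']['Code'] == 'DryRunOperation':
        print('Permission granted: the call would have succeeded.')
    elif e.response['Error']['Code'] == 'UnauthorizedOperation':
        print('Permission denied.')
    else:
        raise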