How to set the SQS message class in boto3?

I'm migrating from boto to boto3.
The following snippet sets the message class on my SQS queue:
conn = boto.sqs.connect_to_region(my_region)
queue = conn.create_queue(queue_name)
queue.set_message_class(boto.sqs.message.RawMessage)
How do I do this with boto3?

You need to create an SQS client and use it. You don't need to set the RawMessage class anymore; boto3 works with raw message bodies by default.
import boto3

client = boto3.client('sqs')
response = client.send_message(
    QueueUrl='string',
    MessageBody='string',
    DelaySeconds=123,
    MessageAttributes={
        'string': {
            'StringValue': 'string',
            'BinaryValue': b'bytes',
            'StringListValues': [
                'string',
            ],
            'BinaryListValues': [
                b'bytes',
            ],
            'DataType': 'string'
        }
    },
    MessageDeduplicationId='string',
    MessageGroupId='string'
)
Source: https://boto3.readthedocs.io/en/latest/reference/services/sqs.html#SQS.Client.send_message
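Receiving is the same story: there is no message-class machinery on the consuming side either, and 'Body' comes back as the raw string, exactly what RawMessage gave you in boto. A minimal sketch (the queue URL is a placeholder):

import boto3

client = boto3.client('sqs')

# Placeholder queue URL -- substitute your own.
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue'

response = client.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=1,
    WaitTimeSeconds=10,  # long polling
)

for message in response.get('Messages', []):
    # 'Body' is the raw, undecoded message body.
    print(message['Body'])
    # Delete the message after successful processing.
    client.delete_message(
        QueueUrl=queue_url,
        ReceiptHandle=message['ReceiptHandle'],
    )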

Related

Use AWS Lambda and SES to send an email

I ran the following script, which works on my computer but doesn't work in AWS Lambda. All I did was add a "def lambda_handler(event, context): return event" function, as it's required by Lambda. I ran the Lambda test a few times.
I am not getting the SES email and there are no errors when I execute it in Lambda. Any idea why?
import boto3
from botocore.exceptions import ClientError

SENDER = "Sender Name <testu@test.com>"
RECIPIENT = "test@test.com"
CONFIGURATION_SET = "ConfigSet"
AWS_REGION = "ap-southeast-2"
SUBJECT = "Amazon SES Test (SDK for Python)"
BODY_TEXT = ("Amazon SES Test (Python)\r\n"
             "This email was sent with Amazon SES using the "
             "AWS SDK for Python (Boto)."
             )
BODY_HTML = """<html>
</html>
"""

def lambda_handler(event, context):
    return event

# The character encoding for the email.
CHARSET = "UTF-8"
# Create a new SES resource and specify a region.
client = boto3.client('ses', region_name=AWS_REGION)
try:
    response = client.send_email(
        Destination={
            'ToAddresses': [
                RECIPIENT,
            ],
        },
        Message={
            'Body': {
                'Html': {
                    'Charset': CHARSET,
                    'Data': BODY_HTML,
                },
                'Text': {
                    'Charset': CHARSET,
                    'Data': BODY_TEXT,
                },
            },
            'Subject': {
                'Charset': CHARSET,
                'Data': SUBJECT,
            },
        },
        Source=SENDER,
        ConfigurationSetName=CONFIGURATION_SET,
    )
except ClientError as e:
    print(e.response['Error']['Message'])
else:
    print(response['MessageId'])
Just like Anon Coward said, you have to perform the SES send inside the handler function and put the return statement at the bottom of that function.
It should look something like this:
def lambda_handler(event, context):
    # PAYLOAD_HERE_ is a placeholder -- substitute your own send_email
    # arguments; client is the SES client created at module scope, and
    # json_region is a placeholder value.
    response = client.send_email(PAYLOAD_HERE_)
    return {
        "statusCode": 200,
        "headers": {
            "Content-Type": "application/json"
        },
        "body": json.dumps({
            "Region ": json_region
        })
    }
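Putting it together, a minimal sketch of the question's script restructured so the send happens inside the handler (sender and recipient are the placeholders from the question; I've dropped ConfigurationSetName since send_email fails if that configuration set doesn't exist):

import json
import boto3
from botocore.exceptions import ClientError

SENDER = "Sender Name <testu@test.com>"   # placeholder from the question
RECIPIENT = "test@test.com"               # placeholder from the question
AWS_REGION = "ap-southeast-2"
CHARSET = "UTF-8"

client = boto3.client('ses', region_name=AWS_REGION)

def lambda_handler(event, context):
    try:
        response = client.send_email(
            Destination={'ToAddresses': [RECIPIENT]},
            Message={
                'Body': {'Text': {'Charset': CHARSET,
                                  'Data': 'Amazon SES Test (Python)'}},
                'Subject': {'Charset': CHARSET,
                            'Data': 'Amazon SES Test (SDK for Python)'},
            },
            Source=SENDER,
        )
    except ClientError as e:
        return {"statusCode": 500,
                "body": json.dumps({"error": e.response['Error']['Message']})}
    return {"statusCode": 200,
            "body": json.dumps({"MessageId": response['MessageId']})}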

AWS Lambda function handler missing

I created the following lambda function and it's giving me the following error. I am new to Python and I don't think there's anything missing in the function. Can someone please help me make this function work? Thanks
import logging
import json
import boto3

client = boto3.client('workspaces')

def handler(event, context):
    response = create_workspace()
    return event

def create_workspace():
    response = client.create_workspaces(
        Workspaces=[
            {
                'DirectoryId': 'd-9767328a34',
                'UserName': 'Ken',
                'BundleId': 'wsb-6cdbk8901',
                'WorkspaceProperties': {
                    'RunningMode': 'AUTO_STOP'
                },
                'Tags': [
                    {
                        'Key': 'Name',
                        'Value': 'CallCentreProvisioned'
                    },
                ]
            },
        ]
    )
Execution result
{
    "errorMessage": "Handler 'lambda_handler' missing on module 'lambda_function'",
    "errorType": "Runtime.HandlerNotFound",
    "requestId": "a09fd219-b262-4226-a04b-4d26c1b7281f",
    "stackTrace": []
}
Very simple: rename handler to lambda_handler, or change your Lambda configuration's Handler setting to lambda_function.handler (module name, then function name) so it points at the function called handler. There were comments that mentioned this, but no simple answer was given.
There is no good reason to nest the function as the other answer seems to suggest.
You should have this syntax:
def handler_name(event, context):
    # paste your code here
    return some_value
I think it should be like this. Also look at the Naming paragraph of the handler documentation:
import logging
import json
import boto3

# Keep the client at module scope so create_workspace() can see it.
client = boto3.client('workspaces')

def lambda_handler(event, context):
    response = create_workspace()
    return event

def create_workspace():
    response = client.create_workspaces(
        Workspaces=[
            {
                'DirectoryId': 'd-9767328a34',
                'UserName': 'Ken',
                'BundleId': 'wsb-6cdbk8901',
                'WorkspaceProperties': {
                    'RunningMode': 'AUTO_STOP'
                },
                'Tags': [
                    {
                        'Key': 'Name',
                        'Value': 'CallCentreProvisioned'
                    },
                ]
            },
        ]
    )
    return response

Amazon Lex V2 Lambda python code to get intent name or any response back from lambda is not working

I'm trying to get a response back from Lambda for Amazon Lex V2, such as the name of the current intent; it can be a string or any response from a simple program.
I have referred to the Lex V2 documentation, but the code below, which I came up with after several attempts, still shows an error: https://docs.aws.amazon.com/lexv2/latest/dg/lambda.html
error: "Invalid Lambda Response: Received error response from Lambda: Unhandled"
def lambda_handler(event, context):
    entity = event["currentIntent"]["slots"]["Nm"].title()
    intent = event["currentIntent"]["name"]
    response = {
        'sessionState': {
            'dialogAction': {
                'type': 'Close'
            },
            'state': 'Fulfilled'
        },
        'messages': [
            'contentType': 'PlainText',
            'content': "The intent you are in now is "+intent+"!"
        ],
    }
    return response
The 'messages' field is an array of objects, not an array of strings. It should be declared as follows:
'messages': [
    {
        'contentType': 'PlainText',
        'content': "The intent you are in now is " + intent + "!"
    }
]
Reference:
Amazon Lex - Lambda Response format
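Note also that the event shape changed between Lex V1 and V2: in V2 the intent lives under sessionState.intent rather than currentIntent, and a Close response has to echo the intent back with its state. A sketch of a full V2 handler along those lines (the message text is just the question's):

def lambda_handler(event, context):
    # Lex V2 event shape: the intent lives under sessionState.
    intent_name = event['sessionState']['intent']['name']

    return {
        'sessionState': {
            'dialogAction': {'type': 'Close'},
            # The fulfilled intent must be echoed back with its state.
            'intent': {
                'name': intent_name,
                'state': 'Fulfilled'
            }
        },
        'messages': [
            {
                'contentType': 'PlainText',
                'content': "The intent you are in now is " + intent_name + "!"
            }
        ]
    }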
I faced the same issue. The solution that worked for me (in JavaScript) looks like this:
var response = {};
response.messages = [
    message
];
response.sessionState = {
    sessionAttributes: sessionAttributes,
    intent: {
        name: intentRequest.interpretations[0].intent.name,
        state: 'Fulfilled'
    },
    dialogAction: {
        type: "Close",
        fulfillmentState: "Fulfilled"
    }
};
Refer to the Lex V2 developer guide, page 69, Response format:
https://docs.aws.amazon.com/lexv2/latest/dg/lex2.0.pdf

Find EC2 instances that are not equal to X - AWS CLI

I'm looking to find instances whose platform is not "Windows" and tag them with specific tags.
For now I have this script, which tags the instances that are equal to platform "Windows":
import boto3

ec2 = boto3.client('ec2')
response = ec2.describe_instances(Filters=[{'Name': 'platform', 'Values': ['windows']}])
instances = response['Reservations']
for each_res in response['Reservations']:
    for each_inst in each_res['Instances']:
        for instance in instances:
            response = ec2.create_tags(
                Resources=[each_inst['InstanceId']],
                Tags=[
                    {
                        'Key': 'test',
                        'Value': 'test01'
                    }
                ]
            )
I need help adding a block to this script that will add another tag only to EC2 instances whose platform is NOT "Windows".
Try this; it's working for me. Also, by running create_tags inside the for loop you execute one API call per resource, whereas create_tags accepts multiple resources as input. Reference: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ec2.html#EC2.Client.create_tags
import boto3

# Initialize an empty list to store non-Windows instance IDs.
list_nonwindows = []

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.describe_instances()
instances = response["Reservations"]
for each_res in response["Reservations"]:
    for each_inst in each_res["Instances"]:
        # 'Platform' is only present for Windows instances.
        if each_inst.get('Platform') is None:
            instance_s = each_inst.get('InstanceId')
            list_nonwindows.append(instance_s)

response = ec2.create_tags(
    Resources=list_nonwindows,
    Tags=[
        {
            'Key': 'test',
            'Value': 'test01'
        }
    ]
)
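One caveat: describe_instances returns results in pages, so on accounts with many instances the single call above can miss some. A sketch using boto3's built-in paginator (same tagging logic, just iterated over every page):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
list_nonwindows = []

# get_paginator handles NextToken for us across all result pages.
paginator = ec2.get_paginator('describe_instances')
for page in paginator.paginate():
    for reservation in page['Reservations']:
        for instance in reservation['Instances']:
            if instance.get('Platform') is None:
                list_nonwindows.append(instance['InstanceId'])

if list_nonwindows:
    ec2.create_tags(
        Resources=list_nonwindows,
        Tags=[{'Key': 'test', 'Value': 'test01'}]
    )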
Just remove the filter, iterate over all the instances, and add an if condition on the Platform key inside the loop.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")
response = ec2.describe_instances()
instances = response["Reservations"]
for each_res in response["Reservations"]:
    for each_inst in each_res["Instances"]:
        platform = each_inst.get('Platform')
        instance_id = each_inst.get('InstanceId')
        # The API returns the lowercase enum value 'windows', even though
        # the docs prose capitalizes it; compare case-insensitively.
        if platform and platform.lower() == 'windows':
            response = ec2.create_tags(
                Resources=[instance_id],
                Tags=[
                    {
                        'Key': 'test',
                        'Value': 'test01'
                    }
                ]
            )
        else:
            print(f'found non-windows instance: {instance_id}')
            response = ec2.create_tags(
                Resources=[instance_id],
                Tags=[
                    {
                        'Key': 'nonwindow',
                        'Value': 'nonwindowvalue'
                    }
                ]
            )
As per the API docs:
The value is Windows for Windows instances; otherwise blank.
The code works correctly; I tested it:
$ python3 describe_instances.py
found non-windows instance: i-0ba1a62801c895
describe_instances
Response structure received from the describe_instances call:
{
    'Reservations': [
        {
            'Groups': [
                {
                    'GroupName': 'string',
                    'GroupId': 'string'
                },
            ],
            'Instances': [
                {
                    'AmiLaunchIndex': 123,
                    'ImageId': 'string',
                    'InstanceId': 'string',
                    ....
                    'Platform': 'Windows',
                    'PrivateDnsName': 'string',
                    'PrivateIpAddress': 'string',
                    'ProductCodes': [
                        ....

Trigger python script on EMR from Lambda on S3 Object arrival with Object details [duplicate]

I want to execute a spark-submit job on an AWS EMR cluster based on a file-upload event on S3. I am using an AWS Lambda function to capture the event, but I have no idea how to submit a spark-submit job to the EMR cluster from the Lambda function.
Most of the answers I found talked about adding a step to the EMR cluster, but I do not know whether an added step can fire "spark-submit --with args".
You can; I had to do the same thing last week!
Using boto3 for Python (other languages would definitely have a similar solution), you can either start a cluster with the step defined, or attach a step to a cluster that is already up.
Defining the cluster with the step
import boto3

def lambda_handler(event, context):
    conn = boto3.client("emr")
    cluster_id = conn.run_job_flow(
        Name='ClusterName',
        ServiceRole='EMR_DefaultRole',
        JobFlowRole='EMR_EC2_DefaultRole',
        VisibleToAllUsers=True,
        LogUri='s3n://some-log-uri/elasticmapreduce/',
        ReleaseLabel='emr-5.8.0',
        Instances={
            'InstanceGroups': [
                {
                    'Name': 'Master nodes',
                    'Market': 'ON_DEMAND',
                    'InstanceRole': 'MASTER',
                    'InstanceType': 'm3.xlarge',
                    'InstanceCount': 1,
                },
                {
                    'Name': 'Slave nodes',
                    'Market': 'ON_DEMAND',
                    'InstanceRole': 'CORE',
                    'InstanceType': 'm3.xlarge',
                    'InstanceCount': 2,
                }
            ],
            'Ec2KeyName': 'key-name',
            'KeepJobFlowAliveWhenNoSteps': False,
            'TerminationProtected': False
        },
        Applications=[{
            'Name': 'Spark'
        }],
        Configurations=[{
            "Classification": "spark-env",
            "Properties": {},
            "Configurations": [{
                "Classification": "export",
                "Properties": {
                    "PYSPARK_PYTHON": "python35",
                    "PYSPARK_DRIVER_PYTHON": "python35"
                }
            }]
        }],
        BootstrapActions=[{
            'Name': 'Install',
            'ScriptBootstrapAction': {
                'Path': 's3://path/to/bootstrap.script'
            }
        }],
        Steps=[{
            'Name': 'StepName',
            'ActionOnFailure': 'TERMINATE_CLUSTER',
            'HadoopJarStep': {
                'Jar': 's3n://elasticmapreduce/libs/script-runner/script-runner.jar',
                'Args': [
                    "/usr/bin/spark-submit", "--deploy-mode", "cluster",
                    's3://path/to/code.file', '-i', 'input_arg',
                    '-o', 'output_arg'
                ]
            }
        }],
    )
    return "Started cluster {}".format(cluster_id)
Attaching a step to an already running cluster
Adapted from an existing example:
import sys
import time
import boto3

def lambda_handler(event, context):
    conn = boto3.client("emr")
    # Chooses the first cluster which is Running or Waiting;
    # you could also choose by name, or already have the cluster id.
    clusters = conn.list_clusters()
    # Choose the correct cluster.
    clusters = [c["Id"] for c in clusters["Clusters"]
                if c["Status"]["State"] in ["RUNNING", "WAITING"]]
    if not clusters:
        sys.stderr.write("No valid clusters\n")
        sys.exit(1)
    # Take the first relevant cluster.
    cluster_id = clusters[0]
    # Code location on your EMR master node.
    CODE_DIR = "/home/hadoop/code/"
    # Spark configuration example.
    step_args = ["/usr/bin/spark-submit", "--conf", "your-configuration",
                 CODE_DIR + "your_file.py", '--your-parameters', 'parameters']
    step = {
        "Name": "what_you_do-" + time.strftime("%Y%m%d-%H:%M"),
        'ActionOnFailure': 'CONTINUE',
        'HadoopJarStep': {
            'Jar': 's3n://elasticmapreduce/libs/script-runner/script-runner.jar',
            'Args': step_args
        }
    }
    action = conn.add_job_flow_steps(JobFlowId=cluster_id, Steps=[step])
    return "Added step: %s" % (action)
AWS Lambda function Python code, if you want to execute a Spark jar by posting to the Livy endpoint on the EMR master node instead of calling spark-submit directly:
import json
import requests  # botocore.vendored.requests is deprecated and removed in newer botocore

def lambda_handler(event, context):
    headers = {"content-type": "application/json"}
    # Livy endpoint on the EMR master node.
    url = 'http://ip-address.ec2.internal:8998/batches'
    payload = {
        'file': 's3://Bucket/Orchestration/SparkCode.jar',
        # Extra jars belong in 'jars'; a single space-joined 'file' string is invalid.
        'jars': ['s3://Bucket/Orchestration/RedshiftJDBC41.jar',
                 's3://Bucket/Orchestration/mysql-connector-java-8.0.12.jar'],
        'className': 'Main Class Name',
        'args': [event.get('rootPath')]
    }
    res = requests.post(url, data=json.dumps(payload), headers=headers, verify=False)
    json_data = json.loads(res.text)
    return json_data.get('id')
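If it helps, you can check on the submitted batch afterwards with Livy's status endpoint (a sketch, assuming the same host):

import requests

def batch_state(batch_id):
    # GET /batches/{batchId}/state returns e.g. {'id': 1, 'state': 'running'}
    url = f'http://ip-address.ec2.internal:8998/batches/{batch_id}/state'
    res = requests.get(url, headers={"content-type": "application/json"})
    return res.json().get('state')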
