I need to update the version of my Lambda function in API Gateway with boto3, from test:3 to test:4. From what I understand, I need to use boto3's update_integration() function with patchOperations, but I can't work out how, as there isn't much documentation about it.
client = boto3.client(
    'apigateway',
    aws_access_key_id='access_key',
    aws_secret_access_key='secret_key',
    region_name='us-east-1'
)
response = client.update_integration(...)  # something here?
Any suggestions?
------- EDIT -------
arn = 'arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/{lambdaFunctionArn}'
response = client.update_integration(
    restApiId='{restApiId}',
    resourceId='{resourceId}',
    httpMethod='POST',
    patchOperations=[
        {
            'op': 'replace',
            'path': '/uri',
            'value': arn
        }
    ]
)
I found that solution on this forum: https://forums.aws.amazon.com/thread.jspa?messageID=694264 but I keep getting this error:
botocore.errorfactory.BadRequestException: An error occurred
(BadRequestException) when calling the UpdateIntegration operation:
Role ARN must be specified for AWS integrations
I found the solution. I should have added /invocations at the end of the arn:
arn = 'arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/{lambdaFunctionArn}/invocations'
response = client.update_integration(
    restApiId='{restApiId}',
    resourceId='{resourceId}',
    httpMethod='POST',
    patchOperations=[
        {
            'op': 'replace',
            'path': '/uri',
            'value': arn
        }
    ]
)
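The /invocations requirement can be captured in a small helper, which makes the URI easy to rebuild for any version or alias. This is a sketch of my own; the helper name and the versioned Lambda ARN below are illustrative placeholders, not values from the question:

```python
def lambda_integration_uri(region, lambda_function_arn):
    """Build the API Gateway integration URI for invoking a Lambda function."""
    return (
        f"arn:aws:apigateway:{region}:lambda:path/2015-03-31"
        f"/functions/{lambda_function_arn}/invocations"
    )

# Pointing the integration at version 4 of the function
uri = lambda_integration_uri(
    "us-east-1",
    "arn:aws:lambda:us-east-1:123456789012:function:test:4",
)
```

The returned string is what goes into the 'value' of the patch operation above.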
Related
I have the following setup:
Python Flask API with boto3 installed. I create a boto3 client like so:
client = boto3.client(
    "ses",
    region_name='eu-west-1',
    aws_access_key_id='myAccessKeyID',
    aws_secret_access_key='mySecretAccessKey'
)
Then I try to send an email like so:
try:
    client.send_email(
        Source='VerifiedSourceEmail',
        Destination={
            'ToAddresses': ['VerifiedRecipientEmail'],
            'CcAddresses': [],
            'BccAddresses': [],
        },
        Message={
            'Subject': {'Data': 'Test'},
            'Body': {'Text': {'Data': 'Test'}},
        }
    )
except ClientError as e:
    return {
        'ErrorCode': e.response['Error']['Code'],
        'ErrorMessage': e.response['Error']['Message'],
    }
When I try to do this, I get:
ErrorCode: SignatureDoesNotMatch
ErrorMessage: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.
I have used the exact same client creation process when connecting to my S3 bucket:
client = boto3.client(
"s3",
region_name='eu-west-1',
aws_access_key_id='myS3AccessKeyID',
aws_secret_access_key='myS3SecretAccessKey'
)
...and this client works fine; I have tested gets, uploads, and deletes with no errors.
I have tested sending a mail directly in AWS, this works. What am I doing wrong?
From this answer:
The keys to be provided to send emails are not SMTP credentials. They are instead the global access keys, which can be retrieved as described at http://docs.amazonwebservices.com/ses/latest/GettingStartedGuide/GetAccessIDs.html.
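For reference, the nested dict shape that send_email expects for its Message parameter (which was malformed in the question's snippet) can be built with a small helper. The helper name is my own; the dict shape follows the SES send_email API:

```python
def build_ses_message(subject, body_text):
    """Build the Message argument for SES client.send_email."""
    return {
        "Subject": {"Data": subject},
        "Body": {"Text": {"Data": body_text}},
    }

message = build_ses_message("Test", "Test")
# This dict would be passed as: client.send_email(Source=..., Destination=..., Message=message)
```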
I have installed HashiCorp Vault on an AWS EC2 server and am trying to access secret keys stored in Vault from an AWS Lambda function using AWS authentication. But I am getting the following error:
{u'errors': [u'error performing token check: failed to look up namespace from the token: no namespace']}
I am basically doing the following:
1. Installed HashiCorp Vault on the EC2 server and enabled AWS authentication
2. Set up a policy & role in Vault using an AWS role
3. Created a Python-based AWS Lambda function that generates a signed AWS STS request
4. Using the signed request, successfully logged in to the Vault server. The response is as follows:
{
    u'lease_id': u'',
    u'warnings': [
        u'TTL of "768h" exceeded the effective max_ttl of "500h"; TTL value is capped accordingly'
    ],
    u'wrap_info': None,
    u'auth': {
        u'token_policies': [
            u'default',
            u'examplepolicy'
        ],
        u'orphan': True,
        u'entity_id': u'xxxxxxxxxxxxxx',
        u'token_type': u'service',
        u'lease_duration': 1800000,
        u'policies': [
            u'default',
            u'examplepolicy'
        ],
        u'client_token': u'xxxxxxxxxxxxxx',
        u'accessor': u'xxxxxxxxxxxxxx',
        u'renewable': True,
        u'metadata': {
            u'auth_type': u'iam',
            u'account_id': u'xxxxxxxxxxxxxx',
            u'role_id': u'xxxxxxxxxxxxxx'
        }
    },
    u'lease_duration': 0,
    u'request_id': u'xxxxxxxxxxxxxx',
    u'data': None,
    u'renewable': False
}
Now, using the client_token from the above response, I am trying to fetch a secret key as shown below:
secretKey = requests.get(url1, headers={"X-Vault-Token": clienttoken})
I get the following error immediately after executing the above line:
{u'errors': [u'error performing token check: failed to look up namespace from the token: no namespace']}
Instead of the "X-Vault-Token: token" header, use "Authorization: Bearer token".
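In terms of the requests call above, the change amounts to swapping the headers dict. A sketch (the helper name is mine; the rest follows the two header forms discussed here):

```python
def vault_request_headers(token, use_bearer=True):
    """Build headers for a Vault HTTP API call.

    Vault accepts the token either as the X-Vault-Token header
    or as a standard Authorization Bearer token.
    """
    if use_bearer:
        return {"Authorization": f"Bearer {token}"}
    return {"X-Vault-Token": token}

# usage (not executed here):
# secretKey = requests.get(url1, headers=vault_request_headers(clienttoken))
```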
I got this error when trying get/put commands in the HashiCorp CLI:
Error making API request.
URL: GET http://127.0.0.1:8200/v1/sys/internal/ui/mounts/secret
Code: 500. Errors:
* error performing token check: failed to look up namespace from the token: no namespace
The error was that I had used quotes ("") when setting VAULT_TOKEN as an env variable.
I have a list of AWS accounts with account ids. I want to fetch the list of all the EC2 instances running in each of those accounts. Can I fetch that list using boto3?
First, you can access your AWS account programmatically by creating a boto3 client.
import boto3

# Use an EC2 client, since describe_instances() is an EC2 operation
client = boto3.client(
    'ec2',
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    aws_session_token=SESSION_TOKEN,
)
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html#method-parameters
After creating a boto3 client with your credentials, you can use boto3's EC2.Client.describe_instances():
response = client.describe_instances(
    Filters=[
        {
            'Name': 'string',
            'Values': [
                'string',
            ]
        },
    ],
    InstanceIds=[
        'string',
    ],
    DryRun=True|False,
    MaxResults=123,
    NextToken='string'
)
Use 'instance-state-name' : 'running' in Filters, like:
{
    'Name': 'instance-state-name',
    'Values': [
        'running',
    ]
}
Then you can get information about your running EC2 instances.
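Note that describe_instances groups instances under Reservations, so extracting the IDs takes a nested loop. A sketch (the sample response below is trimmed to just the relevant keys; the real response carries many more fields):

```python
def running_instance_ids(response):
    """Collect InstanceIds from a describe_instances response dict."""
    ids = []
    for reservation in response.get("Reservations", []):
        for instance in reservation.get("Instances", []):
            ids.append(instance["InstanceId"])
    return ids

# Trimmed-down example of the response shape
sample = {
    "Reservations": [
        {"Instances": [{"InstanceId": "i-0abc"}, {"InstanceId": "i-0def"}]},
        {"Instances": [{"InstanceId": "i-0ghi"}]},
    ]
}
```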
To access an AWS account from the CLI or programs, you need an aws_access_key_id_value and aws_secret_access_key_value. Once you have these details, you can use the following code to list EC2 instances from the AWS account whose keys you specified.
Create config.properties with the following code.
aws_access_key_id_value='YOUR-ACCESS-KEY-OF-THE-AWS-ACCOUNT'
aws_secret_access_key_value='YOUR-SECRET-KEY-OF-THE-AWS-ACCOUNT'
Create list-ec2-instances.py with the following code.
import boto3

def get_config(filename):
    # Execute the properties file as Python and return its variables
    # (the old imp.load_source approach is deprecated and removed in
    # modern Python).
    data = {}
    with open(filename) as f:
        exec(f.read(), data)
    return data

config = get_config('config.properties')

ec2 = boto3.resource(
    'ec2',
    aws_access_key_id=config['aws_access_key_id_value'],
    aws_secret_access_key=config['aws_secret_access_key_value']
)

for instance in ec2.instances.all():
    print(
        "Id: {0}\nPlatform: {1}\nType: {2}\nPublic IPv4: {3}\nAMI: {4}\nState: {5}\n".format(
            instance.id, instance.platform, instance.instance_type,
            instance.public_ip_address, instance.image.id, instance.state
        )
    )
Execute the following command.
python list-ec2-instances.py
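The question mentions a list of accounts. One common approach, which the answers above don't cover and which I am adding as a suggestion, is to call sts.assume_role for a role in each target account and build a client from the temporary credentials. The credential-mapping step can be sketched without touching AWS (the function name is mine; the dict keys follow the STS assume_role response):

```python
def client_kwargs_from_sts(credentials):
    """Map the 'Credentials' dict returned by sts.assume_role to the
    keyword arguments that boto3.client / boto3.resource expect."""
    return {
        "aws_access_key_id": credentials["AccessKeyId"],
        "aws_secret_access_key": credentials["SecretAccessKey"],
        "aws_session_token": credentials["SessionToken"],
    }

# Per account, you would then do something like (not executed here):
# creds = sts.assume_role(RoleArn=role_arn, RoleSessionName="list-ec2")["Credentials"]
# ec2 = boto3.client("ec2", **client_kwargs_from_sts(creds))
# response = ec2.describe_instances(...)
```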
I want to create ~267 CloudWatch alarms, and the manual process is painfully slow. Can someone guide me in using a Boto3 script so that I can set up all the alarms in one shot?
import boto3

# Create CloudWatch client
cloudwatch = boto3.client('cloudwatch')

# Create alarm
cloudwatch.put_metric_alarm(
    AlarmName='Web_Server_CPU_Utilization',
    ComparisonOperator='GreaterThanThreshold',
    EvaluationPeriods=1,
    MetricName='CPUUtilization',
    Namespace='AWS/EC2',
    Period=60,
    Statistic='Average',
    Threshold=70.0,
    ActionsEnabled=False,
    AlarmDescription='Alarm when server CPU exceeds 70%',
    Dimensions=[
        {
            'Name': 'InstanceId',
            'Value': 'i-xxxxxxxxxx'
        },
    ],
    Unit='Percent'  # CPUUtilization is reported in Percent, not Seconds
)
Assuming you want to add a CloudWatch alarm for different EC2 instances, you can simply put the instance IDs in a list and iterate over that list to create the alarms. That'd look like:
import boto3

cloudwatch = boto3.client('cloudwatch')

ec2_instances = [
    'i-xxxxxxxxx1',
    'i-xxxxxxxxx2',
    'i-xxxxxxxxx3'
]

for ec2_instance in ec2_instances:
    cloudwatch.put_metric_alarm(
        AlarmName='Web_Server_CPU_Utilization_%s' % ec2_instance,
        ComparisonOperator='GreaterThanThreshold',
        EvaluationPeriods=1,
        MetricName='CPUUtilization',
        Namespace='AWS/EC2',
        Period=60,
        Statistic='Average',
        Threshold=70.0,
        ActionsEnabled=False,
        AlarmDescription='Alarm when server CPU exceeds 70%',
        Dimensions=[
            {
                'Name': 'InstanceId',
                'Value': ec2_instance
            },
        ],
        Unit='Percent'  # CPUUtilization is reported in Percent, not Seconds
    )
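If the per-instance parameters need to vary (different thresholds, say), it can help to factor the alarm arguments into a function and unpack it into put_metric_alarm. A sketch; the function name and defaults are my own:

```python
def cpu_alarm_params(instance_id, threshold=70.0):
    """Build the keyword arguments for put_metric_alarm for one instance."""
    return {
        "AlarmName": f"Web_Server_CPU_Utilization_{instance_id}",
        "ComparisonOperator": "GreaterThanThreshold",
        "EvaluationPeriods": 1,
        "MetricName": "CPUUtilization",
        "Namespace": "AWS/EC2",
        "Period": 60,
        "Statistic": "Average",
        "Threshold": threshold,
        "ActionsEnabled": False,
        "AlarmDescription": f"Alarm when server CPU exceeds {threshold}%",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Unit": "Percent",
    }

# usage per instance (not executed here):
# cloudwatch.put_metric_alarm(**cpu_alarm_params("i-xxxxxxxxx1", threshold=80.0))
```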
Here is a simple script I use to set up CloudWatch alarms on my running EC2 instances. The aim is to reboot my EC2 instances if StatusCheckFailed_Instance is True.
In case you are getting the "Insufficient Data" message as well, it's worthwhile creating the same alarm in the EC2 console and then making sure your put_metric_alarm call matches the console's source/CloudFormation JSON.
AWS seems to be really fussy about the JSON. Once I matched the EC2 console's JSON exactly, it worked like a charm.
Hope this helps someone.
import boto3

# Specify your region here
region = "ap-northeast-1"
ec2_client = boto3.client("ec2", region_name=region)
cloudwatch = boto3.client('cloudwatch', region_name=region)

# Get running EC2 instances
reservations = ec2_client.describe_instances(Filters=[
    {
        "Name": "instance-state-name",
        "Values": ["running"],
    }
]).get("Reservations")

# Set up an alarm for each instance
for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance['InstanceId']
        cloudwatch.put_metric_alarm(
            AlarmName=f'Status_Check_{instance_id}',
            AlarmDescription=f'Alarm when status check fails on {instance_id}',
            ActionsEnabled=True,
            OKActions=[],
            AlarmActions=[
                f"arn:aws:automate:{region}:ec2:reboot"
            ],
            InsufficientDataActions=[],
            MetricName='StatusCheckFailed_Instance',
            Namespace='AWS/EC2',
            Statistic='Maximum',
            Dimensions=[
                {
                    'Name': 'InstanceId',
                    'Value': instance_id
                },
            ],
            Period=60,
            EvaluationPeriods=2,
            DatapointsToAlarm=2,
            Threshold=0.99,
            ComparisonOperator='GreaterThanOrEqualToThreshold'
        )
I cannot figure out how to launch an EC2 instance in Boto3 with a specified IAM role.
Here is some sample code showing how I have been able to successfully create an instance so far:
import boto3

ec2 = boto3.resource('ec2', region_name='us-west-2')
ec2.create_instances(ImageId='ami-1e299d7e', InstanceType='t2.micro',
                     MinCount=1, MaxCount=1,
                     SecurityGroupIds=['Mysecuritygroup'], KeyName='mykeyname')
Note: Some Boto3 versions accept either Arn or Name but all versions accept Name. I suggest using the role name only.
IamInstanceProfile={
'Arn': 'string',
'Name': 'string'
}
If your profile name is ExampleInstanceProfile and the ARN is arn:aws:iam::123456789012:instance-profile/ExampleInstanceProfile:
ec2.create_instances(ImageId='ami-1e299d7e',
                     InstanceType='t2.micro',
                     MinCount=1, MaxCount=1,
                     SecurityGroupIds=['Mysecuritygroup'],
                     KeyName='mykeyname',
                     IamInstanceProfile={
                         # use either Name or Arn, not both:
                         # 'Arn': 'arn:aws:iam::123456789012:instance-profile/ExampleInstanceProfile',
                         'Name': 'ExampleInstanceProfile'
                     })
Just an addition to the great answer by helloV (I cannot comment due to reputation limitations). I encountered the same error message: "The parameter iamInstanceProfile.name may not be used in combination with iamInstanceProfile.arn", so only one key is allowed. I experimented with both, and using
IamInstanceProfile={ 'Name': 'ExampleInstanceProfile' }
works for me, but not using
IamInstanceProfile={'Arn': 'arn:aws:iam::123456789012:instance-profile/ExampleInstanceProfile'}
I am using boto3 version 1.4.4
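Since create_instances rejects an IamInstanceProfile dict carrying both keys, the mutual exclusivity can be enforced up front. A sketch; the helper name is my own:

```python
def iam_instance_profile_param(name=None, arn=None):
    """Build an IamInstanceProfile dict with exactly one of Name or Arn.

    create_instances raises an error if both keys are present, so fail
    early rather than at the API call.
    """
    if (name is None) == (arn is None):
        raise ValueError("Specify exactly one of name or arn")
    if name is not None:
        return {"Name": name}
    return {"Arn": arn}

# usage (not executed here):
# ec2.create_instances(..., IamInstanceProfile=iam_instance_profile_param(name='ExampleInstanceProfile'))
```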