boto3 Lambda script to shut down RDS not working - Python

I'm just starting out with boto3 and Lambda and was trying to run the function below via PyCharm.
import boto3

client = boto3.client('rds')
response = client.stop_db_instance(
    DBInstanceIdentifier='dummy-mysql-rds'
)
But I receive the following error:
botocore.errorfactory.DBInstanceNotFoundFault: An error occurred (DBInstanceNotFound) when calling the StopDBInstance operation: DBInstance dummy-mysql-rds not found.
Do you know what may be causing this?
For the record, I have the AWS Toolkit installed for PyCharm and can run simple functions to list and describe EC2 instances, and my AWS profile has admin access.

By explicitly defining the profile name, the function below now works via PyCharm. Thank you @OleksiiDonoha for your help in getting this resolved.
import boto3

# setup_default_session configures the default session in place and
# returns None, so its result is not assigned.
boto3.setup_default_session(profile_name='dev')
client = boto3.client('rds')
response = client.stop_db_instance(
    DBInstanceIdentifier='dev-mysql-rds'
)

Related

sqs boto3: The address 'https://us-west-2.queue.amazonaws.com/xxxx/my-name' is not valid for this endpoint

I'm having a very hard time finding out how to correctly configure SQS in boto3 so I can send messages to my SQS queue. It looks like there is some confusion around boto3 and legacy endpoints, but I get the error message "The address 'https://us-west-2.queue.amazonaws.com/xxxx/my-name' is not valid for this endpoint." for every permutation of the config I can imagine. Here's the code:
# Tried both of these
sqs_queue_url = 'https://sqs.us-west-2.amazonaws.com/xxxx/my-queue'
sqs_queue_url = 'https://us-west-2.queue.amazonaws.com/xxxx/my-queue'
# Tried both of these
sqs = boto3.client("sqs", endpoint_url="https://sqs.us-west-2.amazonaws.com")
sqs = boto3.client("sqs")
# _endpoint updates
logger.info("sqs endpoint: %s", sqs._endpoint)
# Keeps failing
sqs.send_message(QueueUrl=sqs_queue_url, MessageBody=message_json)
I'm hoping this is a silly mistake. What config am I missing?
From the docs, the AWS CLI and the SDK for Python use legacy endpoints:
If you use the AWS CLI or SDK for Python, you can use the following legacy endpoints.
Also, when you set the endpoint you need to include the https:// scheme:
sqs = boto3.client("sqs", endpoint_url="https://us-west-2.queue.amazonaws.com")

Send command to Amazon EC2 instance using Boto3 in Python

I am able to create an instance from an image on Amazon EC2 using boto3 with the following code:
ec2 = boto3.client('ec2', region_name='eu-west-2')
instance = ec2.run_instances(
    ImageId='ami-011c936382e4e2g9c',
    MinCount=1,
    MaxCount=1,
    InstanceType='t2.micro',
    SecurityGroupIds=['sg-0c08ad7b130e3hf3'],
)
id = str(instance['Instances'][0]['InstanceId'])
This works fine, but I then wish to send a simple command to the instance which will execute a Python script stored on the instance. From what I can gather, boto3 has the AWS command-line functionality built in, so I shouldn't have to SSH to the instance; I should be able to send a command through boto3. However, I'm struggling to do this after trying different variations of the code below:
client = boto3.client('ssm', region_name='eu-west-2')
commands = ['echo "hello world" > hello.txt']  # this would be replaced by the command to execute the python script
instance_id = [id]  # id being the instance id established from above
response = client.send_command(
    DocumentName='AWS-RunShellScript',
    Parameters={'commands': commands},
    InstanceIds=instance_id,
)
I'm aware that it takes time for the server to fire up, etc., but this isn't the problem: I have executed this second chunk of code after a large delay, when I know the server is indeed ready to go.
As mentioned, I think this might be to do with the .pem file that I normally need in order to PuTTY/SSH into an instance, as this isn't configured anywhere in my code. Any clues would be greatly appreciated!

Boto3 Error in AWS SDK: botocore.exceptions.NoCredentialsError: Unable to locate credentials

When I simply run the following code, I always get this error.
import boto3 as boto
import sys
import json

role_to_assume_arn = "arn:aws:iam::xxxxxxxxxxxx:role/AWSxxxx_xxxxxxAdminaccess_xxxxx24fexxx"
role_session_name = 'AssumeRoleSession1'

sts_client = boto.client('sts')
assumed_role_object = sts_client.assume_role(
    RoleArn=role_to_assume_arn,
    RoleSessionName=role_session_name,
)
creds = assumed_role_object['Credentials']

# note: the module was imported as "boto", so the original boto3.client
# here would have raised a NameError
sts_assumed_role = boto.client('sts',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)
rds_client = boto.client('rds',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)
I don't want to set and change the temporary session keys frequently; instead I want them to be set directly through code, like I've just written.
Am I wrong? Is there a way to set the credentials like this directly in the program, or not?
Or is it mandatory to provide the credentials in "~/.aws/credentials"?
I assume you are running this code on your local machine.
The STS client you created expects an access key and secret access key.
You have to either configure them using the credentials file, or you can directly hardcode your access key and secret access key like below (not recommended).
client = boto3.client('sts', aws_access_key_id=key, aws_secret_access_key=sec_key, region_name=region_name)
https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/guide_credentials_profiles.html
If you are running this code on an EC2 instance, install boto3 and run aws configure. Follow the link below.
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html

Attempting to use service client factory with no configured API client

I am trying to add ISP (in-skill purchasing) to my Alexa skill. The skill code is written in Python, and in the LaunchRequest handler I have written the following code:
locale = handler_input.request_envelope.request.locale
monetization_service = handler_input.service_client_factory.get_monetization_service()
product_response = monetization_service.get_in_skill_products(locale)
if isinstance(product_response, InSkillProductsResponse):
    in_skill_product_list = product_response.in_skill_products
    self._logger.info(in_skill_product_list)
When I run my Lambda, though, I get the following error:
Attempting to use service client factory with no configured API client
Has anybody faced this issue? Let me know what I am doing incorrectly.
While initializing the skill builder I was using
sb = SkillBuilder()
This SkillBuilder does not have an ApiClient configured. Changing it to
sb = StandardSkillBuilder()
works, as StandardSkillBuilder has an ApiClient configured.

boto3 says table is non-existent but I can query it in the AWS CLI

I'm trying to get a table description using boto3 and python using the following snippet:
import boto3

dynamodb = boto3.client('dynamodb', region_name='us-east-2', endpoint_url="http://localhost:8000")
response = dynamodb.describe_table(
    TableName='bars'
)
print(response)
but I keep getting this error:
botocore.errorfactory.ResourceNotFoundException: An error occurred (ResourceNotFoundException) when calling the DescribeTable operation: Cannot do operations on a non-existent table
But I'm able to query it in the AWS CLI when I run this command:
aws dynamodb describe-table --table-name bars
Is something borked with my boto3 setup? Obviously my AWS config file is fine because I can access the table using the CLI.
Remove the endpoint_url parameter.
It is useful when connecting to DynamoDB Local, but it is not necessary when connecting to the 'live' Amazon DynamoDB service.
The AWS Command-Line Interface (CLI) uses the live service by default, which is why your CLI command finds the table while your boto3 code, pointed at localhost, does not.
