How to use boto3 inside an EC2 instance - python

I have a Python app running in a Docker container on an EC2 instance managed by ECS (well, that's what I would like...). However, to use services like SSM with boto3, I need to know the region where the instance is running. I don't need any credentials, as I use an instance role that grants access to the service, so a default Session is fine.
I know that it is possible to fetch the region with a curl call to the instance metadata endpoint, but is there a more elegant way to instantiate a client with a region name (or credentials) inside an EC2 instance?
I ran through the boto3 documentation and found
Note that if you've launched an EC2 instance with an IAM role configured, there's no explicit configuration you need to set in boto3 to use these credentials. Boto3 will automatically use IAM role credentials if it does not find credentials in any of the other places listed above.
So why do I need to pass the region name for the SSM client, for example? Is there a workaround?

Region is a required parameter for the SSM client so that it knows which regional endpoint it should be interacting with. It does not try to infer the region, even if you're running in the AWS cloud.
If you want the region to be picked up automatically in your container, the simplest way is to use the AWS environment variables.
In your container definition, use the environment attribute to specify a variable named AWS_DEFAULT_REGION with the value of your current region.
By doing this you will not have to specify a region in the SDK calls within the container.
For more information, see the ECS task definition documentation's example of the environment attribute; a rough sketch of the same idea follows below.
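A minimal boto3 sketch of registering a task definition with that environment variable (the family, container name, image, and region values here are placeholders, not your actual setup):
import boto3

# Run this wherever you register task definitions (a workstation or CI job
# that already has a region and credentials configured).
ecs = boto3.client('ecs')

ecs.register_task_definition(
    family='my-app',  # placeholder
    containerDefinitions=[
        {
            'name': 'my-app',  # placeholder
            'image': '123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest',  # placeholder
            'memory': 512,
            # The variable boto3 reads for its default region:
            'environment': [
                {'name': 'AWS_DEFAULT_REGION', 'value': 'us-east-1'},  # your region
            ],
        },
    ],
)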

Here is how to retrieve a parameter from the Parameter Store using the instance profile credentials:
#!/usr/bin/env python3
import boto3
from ec2_metadata import ec2_metadata

# Resolve the region from the instance metadata, then let the instance
# profile supply the credentials.
session = boto3.Session(region_name=ec2_metadata.region)
ssm = session.client('ssm')

# Fetch and decrypt a parameter from the Parameter Store.
parameter = ssm.get_parameter(Name='/path/to/a/parameter', WithDecryption=True)
print(parameter['Parameter']['Value'])
Replace the client section with the service of your choice and you should be set.

Related

Hardcoded Credentials for Dynamodb AWS Queries

I have a script which checks whether a specific value is inside a cell in a DynamoDB table in AWS. I used to hardcode credentials, including the secret key, in my script like this:
from boto3 import Session

dynamodb_session = Session(aws_access_key_id='access_key_id',
                           aws_secret_access_key='secret_access_key',
                           region_name='region')
dynamodb = dynamodb_session.resource('dynamodb')
table = dynamodb.Table('table_name')
Are there any other ways to use those credentials without adding them to my script? Thank you.
If you are running that code on an Amazon EC2 instance, then you simply need to assign an IAM Role to the instance and it will automatically receive credentials.
If you are running that code on your own computer, then use the AWS Command-Line Interface (CLI) aws configure command to store the credentials in a local configuration file. (They will be stored in ~/.aws/credentials.)
Then, in both cases, you can simply use:
dynamodb = boto3.resource('dynamodb')
You can set the default region in that configuration too.
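As a minimal sketch of that (the table name and key here are placeholders), the same lookup works with no credentials in the script at all:
import boto3

# Credentials come from the instance role (or ~/.aws/credentials) and the
# region from the same configuration, so nothing sensitive lives in the code.
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('table_name')  # placeholder table name

response = table.get_item(Key={'id': 'some-id'})  # placeholder key
print(response.get('Item'))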

How to store secrets for a Python Flask App on EC2

I have a simple Flask App that uses Stripe running on an EC2 instance.
I followed this guide to get it running: https://medium.com/techfront/step-by-step-visual-guide-on-deploying-a-flask-application-on-aws-ec2-8e3e8b82c4f7
I export the keys as environment variables and then read them in the code.
import os

stripe_keys = {
    "secret_key": os.environ["STRIPE_SECRET_KEY"],
    "publishable_key": os.environ["STRIPE_PUBLISHABLE_KEY"],
    "webhook_secret": os.environ["STRIPE_WEBHOOK_KEY"],
}
However, this requires me to SSH into the EC2 machines to set the variables. Is there a better approach?
I'd recommend AWS Systems Manager Parameter Store:
maintain your keys in SSM Parameter Store, choosing the SecureString type so your keys are encrypted at rest
give your EC2 instance's IAM role enough permissions to fetch and decrypt the SecureString parameters stored in SSM Parameter Store
make sure your EC2 instance can reach the SSM endpoint, either over the Internet or through a VPC endpoint, since SSM Parameter Store is a regional AWS service
in your code, use the AWS SDK to fetch and decrypt the SecureString parameters stored in SSM Parameter Store (see the sketch below)
I reckon you're writing in Python, so https://nqbao.medium.com/how-to-use-aws-ssm-parameter-store-easily-in-python-94fda04fea84
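A minimal sketch of the last step, assuming the keys are stored under a /stripe/ prefix as SecureString parameters (the prefix and resulting variable names are placeholders):
import os
import boto3

ssm = boto3.client('ssm')  # region and credentials come from the instance profile/environment

# Fetch every SecureString under the /stripe prefix and expose the values as
# environment variables so the existing Flask code can stay unchanged.
# (Pagination is omitted for brevity.)
response = ssm.get_parameters_by_path(Path='/stripe', WithDecryption=True)
for parameter in response['Parameters']:
    name = parameter['Name'].split('/')[-1]  # e.g. 'secret_key'
    os.environ['STRIPE_' + name.upper()] = parameter['Value']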
PS: if you use CloudFormation or other infrastructure-as-code tools to provision your EC2 instances, most IaC tools support injecting SSM Parameter Store values as environment variables during deployment. With this approach your code can stay as is, and your EC2 instance doesn't need extra permissions.
As Chris Chen pointed out, you can use AWS Parameter Store and on top of it: AWStanding
Suppose you stored your variables like this in Parameter Store:
"/stripe/secret_key"
"/stripe/publishable_key"
"/stripe/webhook_secret"
Then you can write code like this:
import os
from awstanding.parameter_store import load_path

# Load every parameter under the /stripe path into the environment.
load_path('/stripe')

# Now you can access your variables like this:
os.environ["STRIPE_SECRET_KEY"]
os.environ["STRIPE_PUBLISHABLE_KEY"]
os.environ["STRIPE_WEBHOOK_SECRET"]

# or store them in settings variables:
STRIPE_SECRET_KEY = os.environ["STRIPE_SECRET_KEY"]
Also, it automatically handles any encrypted keys.

Using Boto3 to get AWS configuration option

I am looking for a way to perform the equivalent of the AWS CLI command aws configure get varname [--profile profile-name] using boto3 in Python. Does anyone know if this is possible without either:
Parsing the AWS config file myself
Somehow interacting with the AWS CLI itself from my python script
For more context, I am writing a python cli tool that will interact with AWS APIs using boto3. The python tool uses an AWS session token stored in a profile in the ~/.aws/credentials file. I am using the saml2aws cli to fetch AWS credentials from my company's identity provider, which writes the aws_access_key_id, aws_secret_access_key, aws_session_token, aws_security_token, x_principal_arn, and x_security_token_expires parameters to the ~/.aws/credentials file like so:
[saml]
aws_access_key_id = #REMOVED#
aws_secret_access_key = #REMOVED#
aws_session_token = #REMOVED#
aws_security_token = #REMOVED#
x_principal_arn = arn:aws:sts::000000000123:assumed-role/MyAssumedRole
x_security_token_expires = 2019-08-19T15:00:56-06:00
By the nature of my python cli tool, sometimes the tool will execute past the expiration time of the AWS session token, which is enforced to be quite short by my company. I want the python cli tool to check the expiration time before it starts its critical task, to verify that it has enough time to complete the task, and if not, to alert the user to refresh their session token.
Using the AWS CLI, I can fetch the expiration time of the AWS session token from the ~/.aws/credentials file like this:
$ aws configure get x_security_token_expires --profile saml
2019-08-19T15:00:56-06:00
and I am curious if boto3 has a mechanism I was unable to find to do something similar.
As an alternate solution, given an already generated AWS session token, is it possible to fetch the expiration time of it? However, given the lack of answers on questions such as Ways to find out how soon the AWS session expires?, I would guess not.
Since the official AWS CLI is powered by boto3, I was able to dig into the source to find out how aws configure get is implemented. It's possible to read the profile configuration through the botocore Session object. Here is some code to get the config profile and value used in your example:
import botocore.session
# Create an empty botocore session directly
session = botocore.session.Session()
# Get config of desired profile. full_config is a standard python dictionary.
profiles_config = session.full_config.get("profiles", {})
saml_config = profiles_config.get("saml", {})
# Get config value. This will be None if the setting doesn't exist.
saml_security_token_expires = saml_config.get("x_security_token_expires")
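Building on that, here is a sketch of the check described in the question, assuming the value was found and is in ISO 8601 form as shown above:
from datetime import datetime, timezone

# Parse the stored timestamp (e.g. '2019-08-19T15:00:56-06:00') and see how
# much time is left before the session token expires.
expires_at = datetime.fromisoformat(saml_security_token_expires)
remaining = expires_at - datetime.now(timezone.utc)
if remaining.total_seconds() <= 0:
    print('Session token has expired; refresh it with saml2aws.')
else:
    print(f'Session token expires in {remaining}.')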
I'm using code similar to the above as part of a transparent session cache. It checks for a profile's role_arn so I can identify a cached session to load if one exists and hasn't expired.
As far as the alternate question of knowing how long a given session has before expiring, you are correct in that there is currently no API call that can tell you this. Session expiration is only given when the session is created, either through STS get_session_token or assume_role API calls. You have to hold onto the expiration info yourself after that.
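For completeness, a sketch of where that expiration is surfaced at creation time (this particular call assumes long-term IAM user credentials are configured):
import boto3

sts = boto3.client('sts')

# The expiration only appears in the response of the call that created the
# temporary credentials, so it has to be stored if you need it later.
response = sts.get_session_token(DurationSeconds=3600)
print(response['Credentials']['Expiration'])  # timezone-aware datetime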

How to correctly/safely access parameters from AWS SSM Parameter store for my Python script on EC2 instance?

I have a Python script that I want to run and have it text me a notification if a certain condition is met. I'm using Twilio, so I have a Twilio API token and I want to keep it secret. I have it successfully running locally, and now I'm working on getting it running on an EC2 instance.
Regarding AWS steps, I've created an IAM user with permissions, launched the EC2 instance (and saved the ssh keys), and created some parameters in the AWS SSM Parameter store. Then I ssh'd into the instance and installed boto3. When I try to use boto3 to grab a parameter, I'm unable to locate the credentials:
# test.py
import boto3
ssm = boto3.client('ssm', region_name='us-west-1')
secret = ssm.get_parameter(Name='/test/cli-parameter')
print(secret)
# running the file in the console
>> python test.py
...
raise NoCredentialsError
botocore.exceptions.NoCredentialsError: Unable to locate credentials
I'm pretty sure this means it can't find the credentials that were created when I ran aws configure, which created the .aws/credentials file. I believe this is because I ran aws configure on my local machine rather than while ssh'd into the instance. I did this to keep my AWS ID and secret key off of my EC2 instance, because I thought I'm supposed to keep those private and not put tokens/keys on my EC2 instance. I think I can solve the issue by running aws configure while ssh'd into my instance, but I want to understand what happens if there's a .aws/credentials file on my actual EC2 instance, and whether or not this is dangerous. I'm just not sure how this is all supposed to be structured, or what a safe/correct way of running my script and accessing secret variables looks like.
Any insight at all is helpful!
I suspect the answer you're looking for looks something like:
Create an IAM policy which allows access to the SSM parameter (why not use the SecretStore?)
Attach that IAM policy to a role.
Attach the role to your EC2 instance (instance profile).
boto3 will now automatically collect temporary credentials from the instance metadata service when it needs to talk to Parameter Store.
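A rough sketch of the first two steps with boto3 (the role name, policy name, and parameter ARN are placeholders; the IAM console works just as well):
import json
import boto3

iam = boto3.client('iam')

# Inline policy allowing the role to read (and decrypt) the one parameter.
# A SecureString encrypted with a customer-managed KMS key would also need kms:Decrypt.
policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Action': ['ssm:GetParameter'],
        'Resource': 'arn:aws:ssm:us-west-1:123456789012:parameter/test/cli-parameter',  # placeholder
    }],
}

iam.put_role_policy(
    RoleName='my-ec2-role',  # placeholder: the role behind the instance profile
    PolicyName='read-cli-parameter',  # placeholder
    PolicyDocument=json.dumps(policy),
)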

test ansible roles with molecule and boto3

I have Ansible roles that create servers, S3 buckets, security groups... and I want to establish some unit testing using Molecule.
After some research, I found out that Molecule uses Testinfra to run assert commands on the remote/local host. That can work for my roles that install server software like apache2 or nginx, but what about the other roles that just create other AWS resources like load balancers, autoscaling groups, security groups, or S3 buckets? In this case, there will be no host or instances.
It would be easy to write tests with unittest and boto3 and call the AWS API, but my question is: can I use Molecule only, fire up an EC2 instance every time I want to test my security group role, and then do something like this:
def test_security_group_has_80_open(host):
    # Ask the AWS CLI on the test host whether the security group exposes port 80.
    cmd = host.run('aws ec2 describe-security-groups --group-names MySecurityGroup')
    assert cmd.rc == 0
    assert '"ToPort": 80' in cmd.stdout
That EC2 instance would have the AWS CLI installed. Is this a correct way? Is it possible to test all types of roles with Molecule by firing up an EC2 instance that runs AWS CLI calls?
I cannot comment or else I would, but to speed things up you can configure Molecule to not manage the create and destroy sequences, and use the delegated driver with the converge playbook running with connection=local. This way you can simply create the security group using the role without provisioning instances, and use boto3 to confirm your changes are correct.
This way you only need your test environment to have the proper keys available to make the API calls with boto3, instead of also worrying about whether the EC2 instance has them as well.
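A hedged sketch of such a boto3 check (the group name and region are placeholders, and it assumes the role created a TCP ingress rule covering port 80):
import boto3

def test_security_group_has_80_open():
    ec2 = boto3.client('ec2', region_name='us-west-1')  # placeholder region
    response = ec2.describe_security_groups(GroupNames=['MySecurityGroup'])
    group = response['SecurityGroups'][0]

    # Look for an ingress rule that covers TCP port 80.
    assert any(
        rule.get('IpProtocol') == 'tcp'
        and rule.get('FromPort', -1) <= 80 <= rule.get('ToPort', -1)
        for rule in group['IpPermissions']
    )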
