test ansible roles with molecule and boto3 - python

I have Ansible roles that create servers, S3 buckets, security groups, etc., and I want to establish some unit testing using Molecule.
After some research, I found out that Molecule uses Testinfra to run assert commands on the remote/local host. That works for roles that install server software like apache2 or nginx, but what about the roles that only create other AWS resources such as load balancers, autoscaling groups, security groups, or S3 buckets? In those cases there is no host or instance at all.
It would be easy to write tests with unittest and boto3 and call the AWS API directly, but my question is: can I use Molecule alone, fire up an EC2 instance every time I want to test my security group role, and then do something like this:
def test_security_group_has_80_open(host):
    cmd = host.run('aws ec2 describe-security-groups --group-names MySecurityGroup')
    assert cmd.rc == 0
    assert '"ToPort": 80' in cmd.stdout
That EC2 instance would have the AWS CLI installed. Is this a correct way to do it? Is it possible to test all types of roles with Molecule by firing up an EC2 instance that runs AWS CLI calls?

I cannot comment or else I would, but to speed things up you can configure Molecule not to manage the create and destroy sequences, and use the delegated driver with connection=local in the converge playbook. This way you can simply create the security group using the role without provisioning any instances, and then use boto3 to confirm your changes are correct, for example with a verifier test like the sketch below.
This way you only need your test environment to have the proper keys available to make the API calls with boto3, instead of also worrying about whether the EC2 instance has them as well.
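A minimal sketch of such a verifier test, assuming the role has already created a group named MySecurityGroup (the group name and port are only illustrative):
import boto3
def test_security_group_has_80_open():
    # Runs on the Molecule controller itself (delegated driver, connection=local);
    # credentials and region come from the local environment.
    ec2 = boto3.client('ec2')
    resp = ec2.describe_security_groups(
        Filters=[{'Name': 'group-name', 'Values': ['MySecurityGroup']}])
    permissions = resp['SecurityGroups'][0]['IpPermissions']
    assert any(rule.get('ToPort') == 80 for rule in permissions)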

How to correctly/safely access parameters from AWS SSM Parameter store for my Python script on EC2 instance?

I have a Python script that I want to run and have it text me a notification if a certain condition is met. I'm using Twilio, so I have a Twilio API token and I want to keep it secret. I have it successfully running locally, and now I'm working on getting it running on an EC2 instance.
Regarding AWS steps, I've created an IAM user with permissions, launched the EC2 instance (and saved the ssh keys), and created some parameters in the AWS SSM Parameter store. Then I ssh'd into the instance and installed boto3. When I try to use boto3 to grab a parameter, I'm unable to locate the credentials:
# test.py
import boto3
ssm = boto3.client('ssm', region_name='us-west-1')
secret = ssm.get_parameter(Name='/test/cli-parameter')
print(secret)
# running the file in the console
>> python test.py
...
raise NoCredentialsError
botocore.exceptions.NoCredentialsError: Unable to locate credentials
I'm pretty sure this means it can't find the credentials that were created when I ran aws configure and it created the .aws/credentials file. I believe the reason is that I ran aws configure on my local machine rather than while ssh'd into the instance. I did this to keep my AWS ID and secret key off of my EC2 instance, because I thought I'm supposed to keep them private and not put tokens/keys on my EC2 instance.
I think I can solve the issue by running aws configure while ssh'd into my instance, but I want to understand what happens if there's a .aws/credentials file on my actual EC2 instance, and whether or not that is dangerous. I'm just not sure how this is all supposed to be structured, or what a safe/correct way of running my script and accessing secret variables looks like.
Any insight at all is helpful!
I suspect the answer you're looking for looks something like:
Create an IAM policy which allows access to the SSM parameter (why not use Secrets Manager?)
Attach that IAM policy to a role.
Attach the role to your EC2 instance (instance profile)
boto3 will now automatically collect temporary credentials from the instance metadata service when it needs to talk to Parameter Store.
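Once the role is attached, the original test.py should work unchanged; as a quick sanity check, something like this sketch can confirm the role credentials are being picked up (the parameter name and region are the ones from the question, and WithDecryption only matters for SecureString parameters):
import boto3
# With an instance profile attached there are no keys stored on the instance;
# boto3 fetches temporary credentials from the instance metadata service.
sts = boto3.client('sts', region_name='us-west-1')
print(sts.get_caller_identity()['Arn'])   # should print the role's ARN
ssm = boto3.client('ssm', region_name='us-west-1')
secret = ssm.get_parameter(Name='/test/cli-parameter', WithDecryption=True)
print(secret['Parameter']['Value'])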

How to use boto3 inside an EC2 instance

I have a Python app running in a Docker container on an EC2 instance managed by ECS (well, that's what I would like...). However, to use services like SSM with boto3, I need to know the region where the instance is running. I don't need any credentials, since I use a role for the instance which grants access to the service, so a default Session is OK.
I know that it is possible to fetch the region with a curl to the dynamic instance metadata, but is there a more elegant way to instantiate a client with a region name (or credentials) inside an EC2 instance?
I ran through the boto3 documentation and found:
Note that if you've launched an EC2 instance with an IAM role configured, there's no explicit configuration you need to set in boto3 to use these credentials. Boto3 will automatically use IAM role credentials if it does not find credentials in any of the other places listed above.
So why do I need to pass the region name for the SSM client, for example? Is there a workaround?
Region is a required parameter for the SSM client so it knows which regional endpoint to talk to; the SDK does not try to infer it, even when you are running inside AWS.
If you want the region to be picked up automatically inside your container, the simplest way is to use the AWS environment variables.
In your container definition, use the environment attribute to set a variable named AWS_DEFAULT_REGION to the value of your current region.
By doing this you will not have to specify a region in the SDK within the container.
There is an example of the environment attribute in the ECS task definition documentation for more information.
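As a rough sketch of the effect (the parameter path is a placeholder), once AWS_DEFAULT_REGION is set in the container definition the client can be created without an explicit region:
import boto3
# Assumes the container definition sets AWS_DEFAULT_REGION (e.g. eu-west-1)
# and that the task/instance role provides credentials.
ssm = boto3.client('ssm')   # no region_name needed; boto3 reads the env var
parameter = ssm.get_parameter(Name='/path/to/a/parameter', WithDecryption=True)
print(parameter['Parameter']['Value'])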
Here is how to retrieve a parameter from the Parameter Store using the instance profile credentials:
#!/usr/bin/env python3
# ec2_metadata comes from the third-party "ec2-metadata" package (pip install ec2-metadata)
from ec2_metadata import ec2_metadata
import boto3
session = boto3.Session(region_name=ec2_metadata.region)
ssm = session.client('ssm')
parameter = ssm.get_parameter(Name='/path/to/a/parameter', WithDecryption=True)
print(parameter['Parameter']['Value'])
Replace the client section with the service of your choice and you should be set.

How to instantiate an AWS Linux using python API?

1) Instantiate an Amazon Linux micro instance using the AWS Python API (including authentication to AWS)
2) Update the instance with tags: customer=ACME, environment=PROD
3) Assign a security group to the instance
To program in Python on AWS, you should use the boto3 library.
You will need to do the following:
supply credentials to the library (link)
create an EC2 client (link)
use the EC2 client to launch EC2 instances using run_instances (link)
You can specify both tags and security groups in the run_instances call. Additionally, the boto3 documentation provides some Amazon EC2 examples that will help.
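A minimal sketch of such a run_instances call, assuming credentials are already configured and using placeholder values for the AMI, key pair, and security group IDs:
import boto3
# Placeholder IDs below; replace with an Amazon Linux AMI, key pair,
# and security group that exist in your account and region.
ec2 = boto3.client('ec2', region_name='us-east-1')
response = ec2.run_instances(
    ImageId='ami-0123456789abcdef0',
    InstanceType='t2.micro',
    MinCount=1,
    MaxCount=1,
    KeyName='my-key-pair',
    SecurityGroupIds=['sg-0123456789abcdef0'],
    TagSpecifications=[{
        'ResourceType': 'instance',
        'Tags': [
            {'Key': 'customer', 'Value': 'ACME'},
            {'Key': 'environment', 'Value': 'PROD'},
        ],
    }],
)
print(response['Instances'][0]['InstanceId'])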
You may also want to look at this project:
https://github.com/nchammas/flintrock
It is a Hadoop and Apache Spark clustering project, but it can serve as inspiration.
It actually implements many of the features you want, such as security groups and filtering by tag name. Just look around the code.

Discovering peer instances in Azure Virtual Machine Scale Set

Problem: Given N instances launched as part of a VMSS, I would like my application code on each Azure instance to discover the IP addresses of the other peer instances. How do I do this?
The overall intent is to cluster the instances so as to provide active-passive HA or to keep the configuration in sync.
It seems there is some support for REST API based querying: https://learn.microsoft.com/en-us/rest/api/virtualmachinescalesets/
I would like to know any other way to do it, i.e. either the Python SDK or the instance metadata URL, etc.
The REST API you mentioned has a Python SDK, the "azure-mgmt-compute" client:
https://learn.microsoft.com/python/api/azure.mgmt.compute.compute.computemanagementclient
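As an illustrative sketch (not part of the original answer), the peer private IPs can also be listed with the companion azure-mgmt-network package; the subscription ID, resource group, and scale set name below are placeholders:
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
# Assumes the azure-identity and azure-mgmt-network packages are installed
# and that the identity running this code can read the scale set's NICs.
credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, "<subscription-id>")
nics = network_client.network_interfaces.list_virtual_machine_scale_set_network_interfaces(
    "my-resource-group", "my-vmss")
for nic in nics:
    for ip_config in nic.ip_configurations:
        print(ip_config.private_ip_address)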
One way to do this would be to use instance metadata. Right now instance metadata only shows information about the VM it's running on, e.g.
curl -H Metadata:true "http://169.254.169.254/metadata/instance/compute?api-version=2017-03-01"
{"compute":
{"location":"westcentralus","name":"imdsvmss_0","offer":"UbuntuServer","osType":"Linux","platformFaultDomain":"0","platformUpdateDomain":"0",
"publisher":"Canonical","sku":"16.04-LTS","version":"16.04.201703300","vmId":"e850e4fa-0fcf-423b-9aed-6095228c0bfc","vmSize":"Standard_D1_V2"},
"network":{"interface":[{"ipv4":{"ipaddress":[{"ipaddress":"10.0.0.4","publicip":"52.161.25.104"}],"subnet":[{"address":"10.0.0.0","dnsservers":[],"prefix":"24"}]},
"ipv6":{"ipaddress":[]},"mac":"000D3AF8BECE"}]}}
You could do something like have each VM send its info to a listener on VM #0 or to an external service, or you could combine this with Azure Files and have each VM write to a common share. There is an Azure template proof of concept here which outputs information from each VM to an Azure File share: https://github.com/Azure/azure-quickstart-templates/tree/master/201-vmss-azure-files-linux - every VM has a mount point which contains info written by every VM.
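As a purely illustrative sketch of that shared-file idea (the mount point /mnt/azfiles/peers is hypothetical, and gethostbyname may need adjusting depending on how each VM resolves its own name):
import socket
from pathlib import Path
# Each VM writes its own hostname -> IP record to the shared Azure Files mount;
# reading the directory back yields every peer that has checked in.
share = Path("/mnt/azfiles/peers")
share.mkdir(parents=True, exist_ok=True)
(share / socket.gethostname()).write_text(socket.gethostbyname(socket.gethostname()))
for peer_file in sorted(share.iterdir()):
    print(peer_file.name, peer_file.read_text())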

How do I list Security Groups of current Instance in AWS EC2?

EDIT: Removed BOTO from the question title as it's not needed.
Is there a way to find the security groups of an EC2 instance using Python and possibly Boto?
I can only find docs about creating or removing security groups, but I want to find out which security groups have been added to my current EC2 instance.
To list the security groups of the current instance you don't need Boto/Boto3; just make use of the EC2 instance metadata service.
import urllib.request
# Ask the instance metadata service for this instance's security group names
url = "http://169.254.169.254/latest/meta-data/security-groups"
sgs = urllib.request.urlopen(url).read().decode()
print(sgs)
You can also check it from the instance itself by executing the command below:
curl http://169.254.169.254/latest/meta-data/security-groups
or from the AWS CLI:
aws ec2 describe-security-groups
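Since the question also asks about Python and Boto, here is a sketch doing the same with boto3, assuming the instance role allows ec2:DescribeInstances (adjust the region to your own):
import urllib.request
import boto3
# Look up this instance's ID from the metadata service, then ask the EC2 API
# which security groups are attached to it.
instance_id = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/instance-id").read().decode()
ec2 = boto3.client('ec2', region_name='us-east-1')
reservations = ec2.describe_instances(InstanceIds=[instance_id])
for sg in reservations['Reservations'][0]['Instances'][0]['SecurityGroups']:
    print(sg['GroupId'], sg['GroupName'])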
