Boto3: How can I set Security Group Ids to default? - python

I am trying to set security group ids while creating an EC2 instance. If I have a specific security group Id list, I can do this:
self.resource = boto3.resource(resource, region_name=self.region)
ec2 = self.resource
instances = ec2.create_instances(
    ImageId=image_id,
    MinCount=minCount,
    MaxCount=maxCount,
    InstanceType=instance_type,
    SubnetId=subnet_id,
    KeyName=key_pair,
    SecurityGroupIds=security_groups
)
If I don't have security group IDs, I'd like to use a default security group, the same way a launch-wizard-# security group gets associated when I don't pass the SecurityGroupIds argument to ec2.create_instances. Is there a way to pass something to the SecurityGroupIds argument that tells boto3 to use the default security group ID? More specifically:
instances = ec2.create_instances(
    ImageId=image_id,
    MinCount=minCount,
    MaxCount=maxCount,
    InstanceType=instance_type,
    SubnetId=subnet_id,
    KeyName=key_pair,
    SecurityGroupIds=<"What do I pass here so that boto3 takes in defaults?">
)
So that I get the same behavior as below (using the previous code snippet):
instances = ec2.create_instances(
    ImageId=image_id,
    MinCount=minCount,
    MaxCount=maxCount,
    InstanceType=instance_type,
    SubnetId=subnet_id,
    KeyName=key_pair,
)
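The thread doesn't settle on an answer, but one common approach (a sketch, not from the thread) is to resolve the VPC's default security group explicitly and pass its ID. The helper name below is made up for illustration; the caller supplies an EC2 client, e.g. boto3.client('ec2', region_name=region):

```python
# Sketch: look up the VPC's "default" security group so its ID can be
# passed to SecurityGroupIds explicitly. `default_security_group_ids`
# is a made-up helper name.
def default_security_group_ids(ec2_client, vpc_id):
    response = ec2_client.describe_security_groups(
        Filters=[
            {'Name': 'group-name', 'Values': ['default']},
            {'Name': 'vpc-id', 'Values': [vpc_id]},
        ]
    )
    return [sg['GroupId'] for sg in response['SecurityGroups']]
```

Note that if you simply omit SecurityGroupIds, EC2 falls back to the VPC's default security group, not a launch-wizard-# group; the launch-wizard-# groups are created by the console's launch wizard, not by the API.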

Related

Unable to get the ARN for AWS SQS and its dead letter queue

I am creating an SQS queue with a dead-letter queue. The queue is created successfully, but I also need to store the queue's ARN in Systems Manager Parameter Store. Is there any way to pass the value to Parameter Store? Please provide inputs.
I tried it like this, but the parameter is not deployed to Parameter Store. I am deploying through CDK using Python.
dead_letter_queue: sqs.Queue = sqs.Queue(
    self,
    id="DeadLetterQueue",
    queue_name="dlqueue",
    retention_period=Duration.days(14)
)
self.sqs_queue: sqs.Queue = sqs.Queue(
    self,
    id="SqsQueue",
    queue_name="queue",
    retention_period=Duration.days(4),
    visibility_timeout=Duration.hours(6),
    delivery_delay=Duration.seconds(0),
    receive_message_wait_time=Duration.seconds(0),
    max_message_size_bytes=262144,  # 256 KiB
    # encryption=sqs.QueueEncryption.SQS_MANAGED,
    dead_letter_queue=sqs.DeadLetterQueue(
        max_receive_count=1,
        queue=dead_letter_queue
    )
)
ssm.StringParameter(
    self, "arn value of queue",
    parameter_name="test",
    string_value=self.sqs_queue.queue_arn
)

unable to check and create aws security group with boto3

I'm trying to create a security group and get the security group ID as output using boto3. I want something like this:
If the security group exists, get/return/output the group ID.
If the security group doesn't exist, create and authorize the group with the given rule and output the group ID.
This is my code so far:
ec2 = boto3.client('ec2', region_name='us-east-1')
for rds_security_group in ec2.describe_security_groups()['SecurityGroups']:
    if rds_security_group['GroupName'] == 'testgroup':
        print(rds_security_group['GroupId'])
        return rds_security_group['GroupId']
    else:
        rds_security_group_name = ec2.create_security_group(
            GroupName='testgroup',
            Description='rds-security-group',
            VpcId='vpc-12345')
        ec2.authorize_security_group_ingress(
            CidrIp='10.10.10.10/11',
            IpProtocol='tcp',
            FromPort=90,
            ToPort=90,
            GroupId=rds_security_group_name['GroupId'])
        print(rds_security_group_name['GroupId'])
        return rds_security_group_name['GroupId']
If the security group doesn't exist, the code works perfectly: it creates the group and returns the group ID. But if the security group already exists, it fails to return the group ID and throws the error below.
botocore.exceptions.ClientError: An error occurred (InvalidGroup.Duplicate) when calling the CreateSecurityGroup operation: The security group 'testgroup' already exists for VPC 'vpc-12345'
Please help me with this.
Your problem is that you are looping through each security group and checking its group name. If the first security group is not called "testgroup", then you try to create it. Change your code to the following:
ec2 = boto3.client('ec2', region_name='us-east-1')
for rds_security_group in ec2.describe_security_groups()['SecurityGroups']:
    if rds_security_group['GroupName'] == 'testgroup':
        print(rds_security_group['GroupId'])
        return rds_security_group['GroupId']

# Security Group was not found, create it
rds_security_group_name = ec2.create_security_group(
    GroupName='testgroup',
    Description='rds-security-group',
    VpcId='vpc-12345')
ec2.authorize_security_group_ingress(
    CidrIp='10.10.10.10/11',
    IpProtocol='tcp',
    FromPort=90,
    ToPort=90,
    GroupId=rds_security_group_name['GroupId'])
print(rds_security_group_name['GroupId'])
return rds_security_group_name['GroupId']
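An alternative not shown in the thread is to let EC2 do the filtering server-side instead of looping over every group in the account. This is a sketch assuming a boto3 EC2 client; `find_security_group_id` is a made-up helper name:

```python
# Sketch: ask EC2 for groups matching the name and VPC directly,
# instead of scanning describe_security_groups() output in Python.
def find_security_group_id(ec2_client, group_name, vpc_id):
    response = ec2_client.describe_security_groups(
        Filters=[
            {'Name': 'group-name', 'Values': [group_name]},
            {'Name': 'vpc-id', 'Values': [vpc_id]},
        ]
    )
    groups = response['SecurityGroups']
    # Returns None when the group does not exist yet, so the caller
    # knows it still has to create it.
    return groups[0]['GroupId'] if groups else None
```

If this returns None, you can fall through to create_security_group and authorize_security_group_ingress as in the answer above.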
I did find the answer to my question with a slight change to the existing code itself.

Boto to Boto3 function implementation

1) How can I implement this boto code in boto3:
conn = boto.connect_ec2()             # boto way
sgs = conn.get_all_security_groups()  # boto way
for sg in sgs:
    if len(sg.instances()) == 0:
        print(sg.name, sg.id, len(sg.instances()))
The above code basically prints all Security Groups with no instances attached.
2) And this individual command, which uses the duct.sh() module:
command = 'aws ec2 describe-instances --filters "Name=instance.group-id,Values=' + sg.id + '\" --query \'Reservations[*].Instances[*].[InstanceId,Tags[?Key==`Name`] | [0].Value]\' --output json'
boto: get_all_security_groups()
boto3: security_group_iterator = ec2.security_groups.all()
However, boto has the .instances() method on boto.ec2.securitygroup.SecurityGroup, whereas boto3 does not have an equivalent method on ec2.SecurityGroup.
Therefore, it looks like you would have to call describe_instances(), passing the security group as a Filter:
response = client.describe_instances(
    Filters=[{'Name': 'instance.group-id', 'Values': ['sg-abcd1234']}])
This will return a list of instances that use the given security group.
You could then count len(response['Reservations']) to find unused security groups. (Note: This is an easy way to find zero-length responses, but to count the actual instances would require adding up all Reservations.Instances.)
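To make that last point concrete: each reservation can contain several instances, so counting reservations undercounts. A small helper (a sketch, assuming the standard describe_instances response shape) sums them instead:

```python
def count_instances(response):
    # Each reservation may hold multiple instances, so sum them all
    # rather than taking len(response['Reservations']).
    return sum(len(r['Instances']) for r in response['Reservations'])
```

A security group whose filtered describe_instances response gives count_instances(response) == 0 is unused.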

How to check if an IAM access key has specific permissions?

Is it possible to check whether a particular AWS IAM key has permissions for a set of specific commands?
Essentially, is there an API for AWS's IAM Policy Simulator?
So far I've been using hacks, such as executing a command with incorrect parameters that utilizes the permission in question, and watching what response I get back.
Example:
# needed resource: 'elasticloadbalancer:SetLoadBalancerListenerSSLCertificate'
# Check:
try:
    elb.set_listener_SSL_certificate(443, 'fake')
except BotoServerError as e:
    if e.error_code == 'AccessDenied':
        print("You don't have access to "
              "elasticloadbalancer:SetLoadBalancerListenerSSLCertificate")
This is obviously hacky. Ideally I'd have some function call like iam.check_against(resource) or something. Any suggestions?
See boto3's simulate_principal_policy.
I've made this function to test for permissions (you'll need to modify it slightly, as it's not completely self-contained):
from typing import Dict, List, Optional

def blocked(
    actions: List[str],
    resources: Optional[List[str]] = None,
    context: Optional[Dict[str, List]] = None
) -> List[str]:
    """test whether IAM user is able to use specified AWS action(s)

    Args:
        actions (list): AWS action(s) to validate IAM user can use.
        resources (list): Check if action(s) can be used on resource(s).
            If None, action(s) must be usable on all resources ("*").
        context (dict): Check if action(s) can be used with context(s).
            If None, it is expected that no context restrictions were set.

    Returns:
        list: Actions denied by IAM due to insufficient permissions.
    """
    if not actions:
        return []
    actions = list(set(actions))

    if resources is None:
        resources = ["*"]

    _context: List[Dict] = [{}]
    if context is not None:
        # Convert context dict to the list[dict] expected by ContextEntries.
        _context = [{
            'ContextKeyName': context_key,
            'ContextKeyValues': [str(val) for val in context_values],
            'ContextKeyType': "string"
        } for context_key, context_values in context.items()]

    # You'll need to create an IAM client here
    results = aws.iam_client().simulate_principal_policy(
        PolicySourceArn=consts.IAM_ARN,  # Your IAM user's ARN goes here
        ActionNames=actions,
        ResourceArns=resources,
        ContextEntries=_context
    )['EvaluationResults']

    return sorted([result['EvalActionName'] for result in results
                   if result['EvalDecision'] != "allowed"])
You need to pass the permission's original action names to actions, like so:
blocked_actions = verify_perms.blocked(actions=[
    "iam:ListUsers",
    "iam:ListAccessKeys",
    "iam:DeleteAccessKey",
    "iam:ListGroupsForUser",
    "iam:RemoveUserFromGroup",
    "iam:DeleteUser"
])
Here's an example that uses the resources and context arguments as well:
def validate_type_and_size_allowed(instance_type, volume_size):
    """validate user is allowed to create instance with type and size"""
    if validate_perms.blocked(actions=["ec2:RunInstances"],
                              resources=["arn:aws:ec2:*:*:instance/*"],
                              context={'ec2:InstanceType': [instance_type]}):
        halt.err(f"Instance type {instance_type} not permitted.")
    if validate_perms.blocked(actions=["ec2:RunInstances"],
                              resources=["arn:aws:ec2:*:*:volume/*"],
                              context={'ec2:VolumeSize': [volume_size]}):
        halt.err(f"Volume size {volume_size}GiB is too large.")
The IAM Policy Simulator provides an excellent UI for determining which users have access to particular API calls.
If you wish to test this programmatically, use the DryRun parameter when making an API call. The call will not actually execute, but you will be informed whether the caller has sufficient permissions to execute it. It will not, however, check whether the call itself would have succeeded (e.g. with an incorrect certificate name).
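As a sketch of that DryRun approach (not code from the thread): a call with DryRun=True raises a ClientError whose error code is DryRunOperation when the caller is permitted, and UnauthorizedOperation when not. The decoding helper below is a made-up name for illustration:

```python
def dry_run_permitted(error_code):
    # EC2 signals "you could have done this" with DryRunOperation and a
    # permissions failure with UnauthorizedOperation; anything else is a
    # real error that should propagate to the caller.
    if error_code == 'DryRunOperation':
        return True
    if error_code == 'UnauthorizedOperation':
        return False
    raise ValueError('unexpected error code: %s' % error_code)

# Typical use with boto3/botocore (sketched as a comment):
# try:
#     ec2.run_instances(DryRun=True, ImageId=ami, MinCount=1, MaxCount=1,
#                       InstanceType='t3.micro')
#     allowed = True
# except botocore.exceptions.ClientError as e:
#     allowed = dry_run_permitted(e.response['Error']['Code'])
```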

How to auto assign public ip to EC2 instance with boto

I have to start a new machine with ec2.run_instances in a given subnet, but also have a public IP auto-assigned (not a fixed Elastic IP).
When one starts a new machine from Amazon's web EC2 Manager via Request Instance (Instance details), there is a check-box called Assign Public IP for auto-assigning a public IP.
It is highlighted in the screenshot (not included here).
How can I achieve that check-box functionality with boto?
Interestingly enough, it seems that not many people have had this problem. For me it was very important to get this right: without this functionality, one is not able to reach the internet from instances launched into a nondefault subnet.
The boto documentation provided no help, there was a related bug recently fixed, see at: https://github.com/boto/boto/pull/1705.
It's important to note that the subnet_id and the security groups have to be provided to the NetworkInterfaceSpecification network interface instead of to run_instances.
import time

import boto
import boto.ec2.networkinterface

from settings.settings import AWS_ACCESS_GENERIC

ec2 = boto.connect_ec2(*AWS_ACCESS_GENERIC)

interface = boto.ec2.networkinterface.NetworkInterfaceSpecification(
    subnet_id='subnet-11d02d71',
    groups=['sg-0365c56d'],
    associate_public_ip_address=True)
interfaces = boto.ec2.networkinterface.NetworkInterfaceCollection(interface)

reservation = ec2.run_instances(image_id='ami-a1074dc8',
                                instance_type='t1.micro',
                                # the following two arguments are provided in
                                # the network_interface instead of at the
                                # global level!
                                # security_group_ids=['sg-0365c56d'],
                                # subnet_id='subnet-11d02d71',
                                network_interfaces=interfaces,
                                key_name='keyPairName')

instance = reservation.instances[0]
instance.update()
while instance.state == "pending":
    print instance, instance.state
    time.sleep(5)
    instance.update()

instance.add_tag("Name", "some name")
print "done", instance
boto3 has NetworkInterfaces that you can configure with DeviceIndex=0; the SubnetId and SecurityGroupIds should be moved from the instance level into this block instead. Here's a version that works for me:
# key_name, subnet and sg come from the enclosing scope
def launch_instance(ami_id, name, type, size, ec2):
    rc = ec2.create_instances(
        ImageId=ami_id,
        MinCount=1,
        MaxCount=1,
        KeyName=key_name,
        InstanceType=size,
        NetworkInterfaces=[
            {
                'DeviceIndex': 0,
                'SubnetId': subnet,
                'AssociatePublicIpAddress': True,
                'Groups': sg
            },
        ]
    )
    instance_id = rc[0].id
    instance_name = name + '-' + type
    ec2.create_tags(
        Resources=[instance_id],
        Tags=[{'Key': 'Name', 'Value': instance_name}]
    )
    return (instance_id, instance_name)
I have never worked with this feature myself, but the run_instances call has a parameter called network_interfaces. According to the documentation, you can give IP address details there.
