Unable to check and create AWS security group with boto3 - Python

I'm trying to create a security group and get the security group ID as output using boto3. I want something like this:
If the security group exists, return the group ID.
If the security group doesn't exist, create it, authorize the group with the given rule, and return the group ID.
This is my code so far:
ec2 = boto3.client('ec2', region_name='us-east-1')
for rds_security_group in ec2.describe_security_groups()['SecurityGroups']:
    if rds_security_group['GroupName'] == 'testgroup':
        print(rds_security_group['GroupId'])
        return rds_security_group['GroupId']
    else:
        rds_security_group_name = ec2.create_security_group(
            GroupName='testgroup',
            Description='rds-security-group',
            VpcId='vpc-12345')
        ec2.authorize_security_group_ingress(
            CidrIp='10.10.10.10/11',
            IpProtocol='tcp',
            FromPort=90,
            ToPort=90,
            GroupId=rds_security_group_name['GroupId'])
        print(rds_security_group_name['GroupId'])
        return rds_security_group_name['GroupId']
If the security group doesn't exist, the code works perfectly: it creates the group and returns the group ID. But if the security group already exists, it fails to return the group ID and throws the following error:
botocore.exceptions.ClientError: An error occurred (InvalidGroup.Duplicate) when calling the CreateSecurityGroup operation: The security group 'testgroup' already exists for VPC 'vpc-12345'
Please help me with this?

Your problem is that you are looping through each security group and checking its name. As soon as you hit a group that is not called "testgroup", the else branch runs and tries to create it, even if "testgroup" exists further down the list. Change your code so the group is only created after the loop finishes without finding a match:
ec2 = boto3.client('ec2', region_name='us-east-1')
for rds_security_group in ec2.describe_security_groups()['SecurityGroups']:
    if rds_security_group['GroupName'] == 'testgroup':
        print(rds_security_group['GroupId'])
        return rds_security_group['GroupId']

# Security group was not found, create it
rds_security_group_name = ec2.create_security_group(
    GroupName='testgroup',
    Description='rds-security-group',
    VpcId='vpc-12345')
ec2.authorize_security_group_ingress(
    CidrIp='10.10.10.10/11',
    IpProtocol='tcp',
    FromPort=90,
    ToPort=90,
    GroupId=rds_security_group_name['GroupId'])
print(rds_security_group_name['GroupId'])
return rds_security_group_name['GroupId']
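As a side note, rather than scanning every security group client-side, you could let EC2 do the filtering for you. This is just a sketch of that variation, assuming the same group name, CIDR, and VPC ID used above:

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Ask EC2 only for groups named 'testgroup' in the target VPC
response = ec2.describe_security_groups(
    Filters=[
        {'Name': 'group-name', 'Values': ['testgroup']},
        {'Name': 'vpc-id', 'Values': ['vpc-12345']},
    ])

groups = response['SecurityGroups']
if groups:
    # The group already exists, reuse its ID
    group_id = groups[0]['GroupId']
else:
    # The group was not found, create and authorize it
    created = ec2.create_security_group(
        GroupName='testgroup',
        Description='rds-security-group',
        VpcId='vpc-12345')
    group_id = created['GroupId']
    ec2.authorize_security_group_ingress(
        CidrIp='10.10.10.10/11',
        IpProtocol='tcp',
        FromPort=90,
        ToPort=90,
        GroupId=group_id)

print(group_id)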

ec2 = boto3.client('ec2', region_name='us-east-1')
for rds_security_group in ec2.describe_security_groups()['SecurityGroups']:
    if rds_security_group['GroupName'] == 'testgroup':
        print(rds_security_group['GroupId'])
        return rds_security_group['GroupId']
    else:
        rds_security_group_name = ec2.create_security_group(
            GroupName='testgroup',
            Description='rds-security-group',
            VpcId='vpc-12345')
        ec2.authorize_security_group_ingress(
            CidrIp='10.10.10.10/11',
            IpProtocol='tcp',
            FromPort=90,
            ToPort=90,
            GroupId=rds_security_group_name['GroupId'])
        print(rds_security_group_name['GroupId'])
        return rds_security_group_name['GroupId']
I did find the answer to my question with a slight change to the existing code itself.

Related

Check file permissions for each file on a S3 Bucket, recursive

I need a script in Python to get the ACL for each file in an S3 bucket, to see whether there are public or private files in that bucket. All files are images, and the Marketing dept wants to know which files are private.
Something like this:
get_acl(object, bucket, ...)
But recursive for all 10,000 files in that bucket.
With the AWS CLI I can't get this to work; any idea where I can find some examples?
Thanks
As you state, you need to list all of the objects in the bucket, and either check their ACL, or test to see if you can access the object without authentication.
If you want to check the ACLs, you can run through each object in turn and check:
import boto3

BUCKET = "example-bucket"

s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects_v2')

# List all of the objects
for page in paginator.paginate(Bucket=BUCKET):
    for cur in page.get("Contents", []):
        # Get the ACL for each object in turn
        # Note: This example does not take into
        # account any bucket-level permissions
        acl = s3.get_object_acl(Bucket=BUCKET, Key=cur['Key'])
        public_read = False
        public_write = False
        # Check each grant in the ACL
        for grant in acl["Grants"]:
            # See if the All Users group has been given a right; keep track of
            # all possibilities in case there are multiple rules for some reason
            if grant["Grantee"].get("URI", "") == "http://acs.amazonaws.com/groups/global/AllUsers":
                if grant["Permission"] in {"READ", "FULL_CONTROL"}:
                    public_read = True
                if grant["Permission"] in {"WRITE", "FULL_CONTROL"}:
                    public_write = True
        # Write out the status for this object
        if public_read and public_write:
            status = "public_read_write"
        elif public_read:
            status = "public_read"
        elif public_write:
            status = "public_write"
        else:
            status = "private"
        print(f"{cur['Key']},{status}")
Another option is to test whether each object can be fetched without authentication: when an object is public you should get a 200 status code, but if it is private the code will be 403.
What you could try first is to get the list of all the objects in your bucket:
aws2 s3api list-objects --bucket bucketnamehere
Then, in Python, you could send a request to each object's URL, for example:
https://bucketname.s3.us-east-1.amazonaws.com/objectname
You can also do the test with the Unix command-line tool curl:
curl -I https://bucketname.s3.us-east-1.amazonaws.com/objectname
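A minimal sketch of that unauthenticated check in Python (my addition, using the requests library and a hypothetical bucket name and region):

import boto3
import requests

BUCKET = "bucketnamehere"   # hypothetical bucket name
REGION = "us-east-1"        # hypothetical region

s3 = boto3.client('s3', region_name=REGION)
paginator = s3.get_paginator('list_objects_v2')

for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        # Send an unauthenticated HEAD request to the object's URL
        url = f"https://{BUCKET}.s3.{REGION}.amazonaws.com/{obj['Key']}"
        resp = requests.head(url)
        # 200 -> publicly readable, 403 -> private (or blocked some other way)
        status = "public" if resp.status_code == 200 else "private"
        print(f"{obj['Key']},{status}")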

Attaching IAM user to IAM Group using Boto3

I am trying to create an IAM user, add it to an IAM group, and attach an IAM policy using Boto3. So far, I am not able to add the IAM user to the IAM group. For response = iam.add_user_to_group(, I am getting this error:
"Exception has occurred: AttributeError
'iam.ServiceResource' object has no attribute 'add_user_to_group'"
import boto3

iam = boto3.resource('iam')  # using the service resource for IAM

created_user = iam.create_user(
    UserName='some_random_user'
)
print(created_user)

create_group_response = iam.create_group(GroupName='Tester')

response = iam.add_user_to_group(
    UserName='some_random_user',  # name of the user
    GroupName='Tester'
)

response = iam.attach_user_policy(
    UserName='some_random_user',  # name of the user
    PolicyArn='arn:aws:iam::196687784845:policy/boto-test'
    # Policy ARN which you want to assign to the user
)
Not sure what seems to be the problem, I am very new to python and boto so might be a very small thing.
According to the docs, add_user_to_group is an action on the IAM client, not on the IAM resource. Alternatively, use the add_user action on the Group resource.
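A short sketch of both options, reusing the user, group, and policy ARN from the question:

import boto3

# Option 1: the low-level IAM client has add_user_to_group and attach_user_policy
iam_client = boto3.client('iam')
iam_client.add_user_to_group(
    GroupName='Tester',
    UserName='some_random_user'
)
iam_client.attach_user_policy(
    UserName='some_random_user',
    PolicyArn='arn:aws:iam::196687784845:policy/boto-test'
)

# Option 2: stay with the resource API and use the Group / User sub-resources
iam = boto3.resource('iam')
group = iam.Group('Tester')
group.add_user(UserName='some_random_user')   # action on the Group resource

user = iam.User('some_random_user')
user.attach_policy(PolicyArn='arn:aws:iam::196687784845:policy/boto-test')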

Boto3: How can I set Security Group Ids to default?

I am trying to set security group IDs while creating an EC2 instance. If I have a specific list of security group IDs, I can do this:
boto3.resource(resource, region_name=self.region)
ec2 = self.resource
instances = ec2.create_instances(
    ImageId=image_id,
    MinCount=minCount,
    MaxCount=maxCount,
    InstanceType=instance_type,
    SubnetId=subnet_id,
    KeyName=key_pair,
    SecurityGroupIds=security_groups
)
If I don't have the security group IDs, I'd like to use the default security group IDs (the ones that get associated via the launch-wizard-# security group) when I don't pass the SecurityGroupIds argument to ec2.create_instances. Is there a way to pass something to the SecurityGroupIds argument that tells boto3 to use the default security group IDs? More specifically:
instances = ec2.create_instances(
    ImageId=image_id,
    MinCount=minCount,
    MaxCount=maxCount,
    InstanceType=instance_type,
    SubnetId=subnet_id,
    KeyName=key_pair,
    SecurityGroupIds= <"What do I pass here so that boto3 takes in defaults?">
)
So that I get the same behavior as below (using the previous code snippet):
instances = ec2.create_instances(
    ImageId=image_id,
    MinCount=minCount,
    MaxCount=maxCount,
    InstanceType=instance_type,
    SubnetId=subnet_id,
    KeyName=key_pair,
)
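No answer is shown here, but one possible workaround (a sketch of my own, not a confirmed boto3 shortcut) is to look up the default security group of the target VPC and pass its ID explicitly whenever no list is supplied; it assumes a hypothetical vpc_id plus the variables from the question:

import boto3

ec2_client = boto3.client('ec2', region_name='us-east-1')  # region assumed

def default_security_group_ids(vpc_id):
    # Return the ID(s) of the VPC's default security group
    response = ec2_client.describe_security_groups(
        Filters=[
            {'Name': 'vpc-id', 'Values': [vpc_id]},
            {'Name': 'group-name', 'Values': ['default']},
        ])
    return [sg['GroupId'] for sg in response['SecurityGroups']]

# Fall back to the VPC's default group when no explicit list was given
security_groups = security_groups or default_security_group_ids(vpc_id)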

How to get only KMS Customer managed keys alone from AWS Account using python boto3?

I am trying to get only the count of customer managed keys from an AWS account, but I am getting double the count of what is actually there, because it is picking up AWS managed keys as well. So I want to know how to get the count of only the customer managed keys from KMS.
Python code:
for region in ec2_regions:
    conn = boto3.client('kms', aws_access_key_id=access_key,
                        aws_secret_access_key=secret_key, region_name=region)
    resp = conn.list_keys()['Keys']
    for keys in resp:
        kms_count.append(keys)
print("kms" + str(len(kms_count)))
You can use the list_aliases method to do so. For AWS managed keys, the alias has 'alias/aws/' as a prefix string, so you can filter on that prefix to keep only the customer managed keys. I've updated your code snippet accordingly:
kms_count = []
for region in ec2_regions:
    conn = boto3.client('kms', aws_access_key_id=access_key,
                        aws_secret_access_key=secret_key, region_name=region)
    resp = conn.list_aliases()['Aliases']
    for keys in resp:
        if 'alias/aws/' not in keys['AliasName']:
            kms_count.append(keys)
print("kms count: " + str(len(kms_count)))
Hope this helps!
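Note that list_aliases only covers keys that actually have an alias. An alternative sketch (my own addition, not part of the answer above) is to call describe_key for each key and check the KeyManager field, which is 'CUSTOMER' for customer managed keys and 'AWS' for AWS managed keys:

import boto3

customer_keys = []
for region in ec2_regions:  # ec2_regions as defined in the question
    conn = boto3.client('kms', aws_access_key_id=access_key,
                        aws_secret_access_key=secret_key, region_name=region)
    paginator = conn.get_paginator('list_keys')
    for page in paginator.paginate():
        for key in page['Keys']:
            metadata = conn.describe_key(KeyId=key['KeyId'])['KeyMetadata']
            if metadata['KeyManager'] == 'CUSTOMER':
                customer_keys.append(key['KeyId'])

print("kms count: " + str(len(customer_keys)))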

Boto to Boto3 function implementation

1) How can I implement this boto code in boto3:
conn = boto.connect_ec2()             # boto way
sgs = conn.get_all_security_groups()  # boto way
for sg in sgs:
    if len(sg.instances()) == 0:
        print(sg.name, sg.id, len(sg.instances()))
The above code basically prints all Security Groups with no instances attached.
2) And how can I implement this individual command, which uses the duct.sh() module:
command = 'aws ec2 describe-instances --filters "Name=instance.group-id,Values=' + sg.id + '\" --query \'Reservations[*].Instances[*].[InstanceId,Tags[?Key==`Name`] | [0].Value]\' --output json'
boto: get_all_security_groups()
boto3: security_group_iterator = ec2.security_groups.all()
However, boto has the .instances() method on boto.ec2.securitygroup.SecurityGroup, whereas boto3 does not have an equivalent method on ec2.SecurityGroup.
Therefore, it looks like you would have to call describe_instances(), passing the security group as a Filter:
response = client.describe_instances(
    Filters=[{'Name': 'instance.group-id', 'Values': ['sg-abcd1234']}])
This will return a list of instances that use the given security group.
You could then count len(response['Reservations']) to find unused security groups. (Note: This is an easy way to find zero-length responses, but to count the actual instances would require adding up all Reservations.Instances.)
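Putting those pieces together, a rough boto3 equivalent of the original boto loop might look like this (my own assembly, assuming default credentials and region):

import boto3

ec2 = boto3.resource('ec2')
client = boto3.client('ec2')

for sg in ec2.security_groups.all():
    # Find instances that reference this security group
    response = client.describe_instances(
        Filters=[{'Name': 'instance.group-id', 'Values': [sg.id]}])
    # Count the actual instances by adding up all Reservations.Instances
    instance_count = sum(len(r['Instances']) for r in response['Reservations'])
    if instance_count == 0:
        print(sg.group_name, sg.id, instance_count)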
