I need a Python script to get the ACL of each file in an S3 bucket, to see whether the files in that bucket are public or private. All of the files are images, and the Marketing dept wants to know which files are private.
Something like this
get_acl(object, bucket, ...)
But recursive, for all 10,000 files in that bucket.
With the AWS CLI I can't get this to work; any idea where I can find some examples?
Thanks
As you state, you need to list all of the objects in the bucket, and either check their ACL, or test to see if you can access the object without authentication.
If you want to check the ACLs, you can run through each object in turn and check:
BUCKET = "example-bucket"
import boto3
s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects_v2')
# List all of the objects
for page in paginator.paginate(Bucket=BUCKET):
for cur in page.get("Contents", []):
# Get the ACL for each object in turn
# Note: This example does not take into
# account any bucket-level permissions
acl = s3.get_object_acl(Bucket=BUCKET, Key=cur['Key'])
public_read = False
public_write = False
# Check each grant in the ACL
for grant in acl["Grants"]:
# See if the All Users group has been given a right, keep track of
# all possibilites in case there are multiple rules for some reason
if grant["Grantee"].get("URI", "") == "http://acs.amazonaws.com/groups/global/AllUsers":
if grant["Permission"] in {"READ", "FULL_CONTROL"}:
public_read = True
if grant["Permission"] in {"WRITE", "FULL_CONTROL"}:
public_read = True
# Write out the status for this object
if public_read and public_write:
status = "public_read_write"
elif public_read:
status = "public_read"
elif public_write:
status = "public_write"
else:
status = "private"
print(f"{cur['Key']},{status}")
When the objects in the bucket are public you should get a 200 status code, but if they are private the code will be 403.
So what you could try first is to get the list of all the objects in your bucket:
aws s3api list-objects --bucket bucketnamehere
So in Python you could iterate over the objects and make a request to each one, for example:
https://bucketname.s3.us-east-1.amazonaws.com/objectname
You can do the test from the Unix command line with curl:
curl -I https://bucketname.s3.us-east-1.amazonaws.com/objectname
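If you'd rather do that status-code check from Python instead of curl, here is a minimal sketch. It assumes the requests library, the virtual-hosted URL layout shown above, and the same list_objects_v2 paginator used earlier; the bucket name and region are placeholders.
import boto3
import requests

BUCKET = "bucketnamehere"   # placeholder bucket name
REGION = "us-east-1"        # placeholder region

s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects_v2')

for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        # Unauthenticated HEAD request: 200 means public, 403 means private
        url = f"https://{BUCKET}.s3.{REGION}.amazonaws.com/{obj['Key']}"
        code = requests.head(url).status_code
        print(f"{obj['Key']},{'public' if code == 200 else 'private'}")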
Related
1) How can I port this boto code to boto3:
conn = boto.connect_ec2()                # boto way
sgs = conn.get_all_security_groups()     # boto way
for sg in sgs:
    if len(sg.instances()) == 0:
        print(sg.name, sg.id, len(sg.instances()))
The above code basically prints all Security Groups with no instances attached.
2) And this individual command, which uses the duct.sh() module:
command = 'aws ec2 describe-instances --filters "Name=instance.group-id,Values=' + sg.id + '\" --query \'Reservations[*].Instances[*].[InstanceId,Tags[?Key==`Name`] | [0].Value]\' --output json'
boto: get_all_security_groups()
boto3: security_group_iterator = ec2.security_groups.all()
However, boto has the .instances() method on boto.ec2.securitygroup.SecurityGroup, whereas boto3 does not have an equivalent method on ec2.SecurityGroup.
Therefore, it looks like you would have to call describe_instances(), passing the security group as a Filter:
response = client.describe_instances(
Filters=[{'Name':'instance.group-id','Values':['sg-abcd1234']}])
This will return a list of instances that use the given security group.
You could then count len(response['Reservations']) to find unused security groups. (Note: This is an easy way to find zero-length responses, but to count the actual instances would require adding up all Reservations.Instances.)
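Putting those pieces together, a rough boto3 sketch of the original boto loop might look like the following. It assumes default credentials and region, and for accounts with many instances you would also want to paginate describe_instances:
import boto3

ec2 = boto3.resource('ec2')
client = boto3.client('ec2')

for sg in ec2.security_groups.all():
    # Count the instances that reference this security group
    response = client.describe_instances(
        Filters=[{'Name': 'instance.group-id', 'Values': [sg.id]}])
    instance_count = sum(len(r['Instances']) for r in response['Reservations'])
    if instance_count == 0:
        print(sg.group_name, sg.id, instance_count)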
Using the boto3 library and python code below, I can iterate through S3 buckets and prefixes, printing out the prefix name and key name as follows:
import boto3
client = boto3.client('s3')
pfx_paginator = client.get_paginator('list_objects_v2')
pfx_iterator = pfx_paginator.paginate(Bucket='app_folders', Delimiter='/')
for prefix in pfx_iterator.search('CommonPrefixes'):
    print(prefix['Prefix'])
    key_paginator = client.get_paginator('list_objects_v2')
    key_iterator = key_paginator.paginate(Bucket='app_folders', Prefix=prefix['Prefix'])
    for key in key_iterator.search('Contents'):
        print(key['Key'])
Inside the key loop, I can put in a counter to count the number of keys (files), but this is an expensive operation. Is there a way to make one call given a bucket name and a prefix and return the count of keys contained in that prefix (even if it is more than 1000)?
UPDATE: I found a post here that shows a way to do this with the AWS CLI as follows:
aws s3api list-objects --bucket BUCKETNAME --prefix "folder/subfolder/" --output json --query "[length(Contents[])]"
Is there a way to do something similar with the boto3 API?
You can do it using the MaxKeys=1000 parameter.
For your case:
pfx_iterator = pfx_paginator.paginate(Bucket='app_folders', Delimiter='/', MaxKeys=1000)
In general:
response = client.list_objects_v2(
    Bucket='string',
    Delimiter='string',
    EncodingType='url',
    MaxKeys=123,
    Prefix='string',
    ContinuationToken='string',
    FetchOwner=True|False,
    StartAfter='string',
    RequestPayer='requester'
)
It will be up to 1,000 times cheaper for you :)
Documentation here
Using the AWS CLI it is easy to count:
aws s3 ls <folder url> --recursive --summarize | grep <pattern>
e.g.,
aws s3 ls s3://abc/ --recursive --summarize | grep "Number of Objects"
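If you want the count from boto3 itself rather than the CLI, one option (a sketch using only the standard list_objects_v2 paginator, so it still makes one request per 1,000 keys behind the scenes) is to sum the KeyCount field of each page:
import boto3

client = boto3.client('s3')
paginator = client.get_paginator('list_objects_v2')

total = 0
for page in paginator.paginate(Bucket='app_folders', Prefix='folder/subfolder/'):
    total += page.get('KeyCount', 0)
print(total)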
I want to use AWS Spot instances to train Neural Networks. To prevent loss of the model when the spot instance is terminated, I plan to create a snapshot of the EBS volume, make a new volume and attach it to a reserved instance. How can I mount, or make the EBS volume available using python & boto3.
These are the steps used to make the volume available on Linux, but I want to automate the process so that I don't need to SSH into the instance every time. Here is the code I use to attach the volume -
import boto3

ec2 = boto3.resource('ec2')
spot = ec2.Instance('i-9a8f5082')
res = ec2.Instance('i-86e65a13')

snapshot = ec2.create_snapshot(VolumeId="vol-5315f7db", Description="testing spot instances")
volume = ec2.create_volume(SnapshotId=snapshot.id, AvailabilityZone='us-west-2a')
# Attach the newly created volume (not the original one) to the reserved instance
res.attach_volume(VolumeId=volume.id, Device='/dev/sdy')
snapshot.delete()
You need to run the mount command on the instance. There are two ways to do it: sending the command over an SSH connection, as @mootmoot wrote, or sending the command through the AWS SSM service, as @Mark B wrote. Here is a detailed SSM solution sample; you can ignore the parts you don't need:
Send bash command to instances using AWS SSM:
# Amazon EC2 Systems Manager requires:
# 1. An IAM role for EC2 instances that will process commands. There should be a
#    Systems Manager role, and the instance should use this role (set at instance creation).
# 2. A separate role for users executing commands. The AWS IAM user whose access and
#    secret keys you use must have SSM permission (e.g. AmazonSSMFullAccess).
# http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-configuring-access-policies.html
import boto3

def execute_commands_on_linux_instances(commands, instance_ids):
    client = boto3.client('ssm', **conn_args)  # Needs your credentials here
    all_ssm_enabled_instances, ssm_enabled_instances, not_worked_instances = [], [], []
    not_worked_instances = instance_ids.copy()
    all_ssm_enabled_instances = list()
    outputs = list({})
    not_executed = list()

    # Select only the instances that have an active SSM agent.
    if len(client.describe_instance_information()['InstanceInformationList']) > 0:
        resp = client.describe_instance_information(MaxResults=20)['InstanceInformationList']
        for ins in resp:
            all_ssm_enabled_instances.append(ins['InstanceId'])
        ssm_enabled_instances = list(set(all_ssm_enabled_instances).intersection(instance_ids))
        not_worked_instances = list(set(instance_ids).difference(all_ssm_enabled_instances))

        # Now, send the command!
        resp = client.send_command(
            DocumentName="AWS-RunShellScript",
            Parameters={'commands': [commands]},
            InstanceIds=ssm_enabled_instances,
        )

        # Get the command id generated by send_command
        com_id = resp['Command']['CommandId']

        # Wait until all the command statuses are out of Pending and InProgress
        list_comm = client.list_commands(CommandId=com_id)
        while True:
            list_comm = client.list_commands(CommandId=com_id)
            if (list_comm['Commands'][0]['Status'] == 'Pending'
                    or list_comm['Commands'][0]['Status'] == 'InProgress'):
                continue
            else:
                # Commands on all instances were executed
                break

        # Get the responses the instances gave to this command (stdout and stderr).
        # Although the command may reach the instance, if it couldn't be executed
        # by the instance (response code -1) it will be ignored.
        for i in ssm_enabled_instances:
            resp2 = client.get_command_invocation(CommandId=com_id, InstanceId=i)
            if resp2['ResponseCode'] == -1:
                not_executed.append(i)
            else:
                outputs.append({'ins_id': i, 'stdout': resp2['StandardOutputContent'],
                                'stderr': resp2['StandardErrorContent']})

        # Remove the instances that couldn't execute the command at all, and add them to not_worked_instances
        ssm_enabled_instances = list(set(ssm_enabled_instances).difference(not_executed))
        not_worked_instances.extend(not_executed)

        return ssm_enabled_instances, not_worked_instances, outputs
    else:
        print("There is no available instance with a working SSM agent!")
        return ssm_enabled_instances, not_worked_instances, outputs
Create the instances with an IAM instance profile that has the required role with the required policy attached. As a result of this instance creation, the instances will have running SSM agents:
def create_ec2_instance(node_type):
    # define userdata to be run at instance launch
    userdata = """#cloud-config
runcmd:
 - cd /tmp
 - sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
"""

    ec2_r = boto3.resource('ec2', **conn_args)
    rolename = "amazonec2ssmrole"
    i_pro_name = "ins_pro_for_ssm"

    # Create an iam instance profile and add required role to this instance profile.
    # Create a role and attach a policy to it if not exist.
    # Instances will have this role to build ssm (ec2 systems manager) connection.
    iam = boto3.resource('iam', **conn_args)

    try:
        response = iam.meta.client.get_instance_profile(InstanceProfileName=i_pro_name)
    except:
        iam.create_instance_profile(InstanceProfileName=i_pro_name)
    try:
        response = iam.meta.client.get_role(RoleName=rolename)
    except:
        iam.create_role(
            AssumeRolePolicyDocument='{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":["ec2.amazonaws.com"]},"Action":["sts:AssumeRole"]}]}',
            RoleName=rolename)
        role = iam.Role(rolename)
        role.attach_policy(PolicyArn='arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM')
        iam.meta.client.add_role_to_instance_profile(InstanceProfileName=i_pro_name, RoleName=rolename)

    iam_ins_profile = {'Name': i_pro_name}

    if node_type == "Medium":
        instance = ec2_r.create_instances(
            ImageId='ami-aa5ebdd2',
            MinCount=1,
            MaxCount=1,
            UserData=userdata,
            InstanceType='t2.medium',
            KeyName=key_pair_name,
            IamInstanceProfile=iam_ins_profile,
            BlockDeviceMappings=[{"DeviceName": "/dev/xvda", "Ebs": {"VolumeSize": 20}}])
    elif node_type == "Micro":
        instance = ec2_r.create_instances(
            ImageId='ami-aa5ebdd2',
            MinCount=1,
            MaxCount=1,
            UserData=userdata,
            InstanceType='t2.micro',
            KeyName=key_pair_name,
            IamInstanceProfile=iam_ins_profile,
            BlockDeviceMappings=[{"DeviceName": "/dev/xvda", "Ebs": {"VolumeSize": 10}}])
    else:
        print("Node Type Error")
        return -1

    # Wait for the instance state; by default one wait is 15 seconds, 40 attempts
    print('Waiting for instance {0} to switch to running state'.format(instance[0].id))
    waiter = ec2_r.meta.client.get_waiter('instance_running')
    waiter.wait(InstanceIds=[instance[0].id])
    instance[0].reload()
    print('Instance is running, public IP: {0}'.format(instance[0].public_ip_address))
    return instance[0].id
Don't forget to give SSM permission (e.g. AmazonSSMFullAccess) to the AWS IAM user whose access and secret keys you are using.
By the way, conn_args can be defined as follows:
conn_args = {
    'aws_access_key_id': Your_Access_Key,
    'aws_secret_access_key': Your_Secret_Key,
    'region_name': 'us-west-2'
}
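With all of that in place, running the actual mount on the instance is just a call to the function above. The device name, mount point, and command below are hypothetical placeholders; the instance ID is the reserved instance from the question:
# Hypothetical example: mount the attached EBS volume via SSM
commands = "sudo mkdir -p /data && sudo mount /dev/xvdy /data"
worked, not_worked, outputs = execute_commands_on_linux_instances(commands, ['i-86e65a13'])
print(worked, not_worked, outputs)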
You have to perform those steps in the operating system. You can't perform those steps via the AWS API (Boto3). Your best bet is to script those steps and then kick off the script somehow via Boto3, possibly using the AWS SSM service.
What's wrong with sending and executing an ssh script remotely? Assuming you are using Ubuntu, i.e.:
ssh -i your.pem ubuntu@ec2_name_or_ip 'sudo bash -s' < mount_script.sh
If you attach tags to those resources, you can later use boto3 to look the resources up by tag name instead of being tied to specific static IDs.
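For example, a minimal sketch that finds a volume by a hypothetical Name tag rather than by its hard-coded volume ID:
import boto3

ec2 = boto3.client('ec2')
# 'training-data' is a hypothetical tag value; tag the volume when you create it
resp = ec2.describe_volumes(
    Filters=[{'Name': 'tag:Name', 'Values': ['training-data']}])
for vol in resp['Volumes']:
    print(vol['VolumeId'], vol['State'])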
I'm going to write a Python program to check if a file is in a certain folder of my Google Cloud Storage. The basic idea is to get the list of all objects in a folder, i.e. a file name list, then check whether the file abc.txt is in that list.
Now the problem is that Google seems to provide only one way to get the object list, which is uri.get_bucket(); see the code below, which is from https://developers.google.com/storage/docs/gspythonlibrary#listing-objects
uri = boto.storage_uri(DOGS_BUCKET, GOOGLE_STORAGE)
for obj in uri.get_bucket():
    print '%s://%s/%s' % (uri.scheme, uri.bucket_name, obj.name)
    print ' "%s"' % obj.get_contents_as_string()
The drawback of uri.get_bucket() is that it appears to fetch all of the objects first, which is what I don't want. I just need to get the object name list of a particular folder (e.g. gs://mybucket/abc/myfolder), which should be much quicker.
Could someone help answer? Appreciate every answer!
Update: the below is true for the older "Google API Client Libraries" for Python, but if you're not using that client, prefer the newer "Google Cloud Client Library" for Python ( https://googleapis.dev/python/storage/latest/index.html ). For the newer library, the equivalent to the below code is:
from google.cloud import storage
client = storage.Client()
for blob in client.list_blobs('bucketname', prefix='abc/myfolder'):
    print(str(blob))
Answer for older client follows.
You may find it easier to work with the JSON API, which has a full-featured Python client. It has a function for listing objects that takes a prefix parameter, which you could use to check for a certain directory and its children in this manner:
import json

from apiclient import discovery

# Auth goes here if necessary. Create authorized http object...
client = discovery.build('storage', 'v1')  # add http=whatever param if auth
request = client.objects().list(
    bucket="mybucket",
    prefix="abc/myfolder")
while request is not None:
    response = request.execute()
    print(json.dumps(response, indent=2))
    request = request.list_next(request, response)
Fuller documentation of the list call is here: https://developers.google.com/storage/docs/json_api/v1/objects/list
And the Google Python API client is documented here:
https://code.google.com/p/google-api-python-client/
This worked for me:
from google.cloud import storage

client = storage.Client()
BUCKET_NAME = 'DEMO_BUCKET'
bucket = client.get_bucket(BUCKET_NAME)
blobs = bucket.list_blobs()
for blob in blobs:
    print(blob.name)
The list_blobs() method will return an iterator used to find blobs in the bucket.
Now you can iterate over blobs and access every object in the bucket. In this example I just print out the name of the object.
This documentation helped me a lot:
https://googleapis.github.io/google-cloud-python/latest/storage/blobs.html
https://googleapis.github.io/google-cloud-python/latest/_modules/google/cloud/storage/client.html#Client.bucket
I hope I could help!
You might also want to look at gcloud-python and documentation.
from gcloud import storage
connection = storage.get_connection(project_name, email, private_key_path)
bucket = connection.get_bucket('my-bucket')
for key in bucket:
    if key.name == 'abc.txt':
        print 'Found it!'
        break
However, you might be better off just checking if the file exists:
if 'abc.txt' in bucket:
    print 'Found it!'
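With the current google-cloud-storage client you can do the same existence check with Blob.exists(); the bucket and object names below are assumptions:
from google.cloud import storage

client = storage.Client()
bucket = client.bucket('my-bucket')
# Assumed object path; adjust to your folder layout
if bucket.blob('abc/myfolder/abc.txt').exists():
    print('Found it!')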
Install the Python package google-cloud-storage with pip (or via PyCharm) and use the code below:
from google.cloud import storage
client = storage.Client()
for blob in client.list_blobs(BUCKET_NAME, prefix=FOLDER_NAME):
    print(str(blob))
I know this is an old question, but I stumbled over this because I was looking for the exact same answer. Answers from Brandon Yarbrough and Abhijit worked for me, but I wanted to get into more detail.
When you run this:
from google.cloud import storage
storage_client = storage.Client()
blobs = list(storage_client.list_blobs(bucket_name, prefix=PREFIX, fields="items(name)"))
You will get Blob objects, with just the name field of all files in the given bucket, like this:
[<Blob: BUCKET_NAME, PREFIX, None>,
<Blob: xml-BUCKET_NAME, [PREFIX]claim_757325.json, None>,
<Blob: xml-BUCKET_NAME, [PREFIX]claim_757390.json, None>,
...]
If you are like me and you want to 1) filter out the first item in the list because it does NOT represent a file (it's just the prefix), 2) get just the name string value, and 3) remove the PREFIX from the file name, you can do something like this:
blob_names = [blob_name.name[len(PREFIX):] for blob_name in blobs if blob_name.name != PREFIX]
Complete code to get just the string files names from a storage bucket:
from google.cloud import storage
storage_client = storage.Client()
blobs = list(storage_client.list_blobs(bucket_name, prefix=PREFIX, fields="items(name)"))
blob_names = [blob_name.name[len(PREFIX):] for blob_name in blobs if blob_name.name != PREFIX]
print(f"blob_names = {blob_names}")
I have created a S3 bucket, uploaded a video, created a streaming distribution in CloudFront. Tested it with a static HTML player and it works. I have created a keypair through the account settings. I have the private key file sitting on my desktop at the moment. That's where I am.
My aim is to get to a point where my Django/Python site creates secure URLs and people can't access the videos unless they've come from one of my pages. The problem is I'm allergic to the way Amazon have laid things out and I'm just getting more and more confused.
I realise this isn't going to be the best question on StackOverflow but I'm certain I can't be the only fool out here that can't make heads or tails out of how to set up a secure CloudFront/S3 situation. I would really appreciate your help and am willing (once two days has passed) give a 500pt bounty to the best answer.
I have several questions that, once answered, should fit into one explanation of how to accomplish what I'm after:
In the documentation (there's an example in the next point) there's lots of XML lying around telling me I need to POST things to various places. Is there an online console for doing this? Or do I literally have to force this up via cURL (et al)?
How do I create an Origin Access Identity for CloudFront and bind it to my distribution? I've read this document but, per the first point, don't know what to do with it. How does my keypair fit into this?
Once that's done, how do I limit the S3 bucket to only allow people to download things through that identity? If this is another XML jobby rather than clicking around the web UI, please tell me where and how I'm supposed to get this into my account.
In Python, what's the easiest way of generating an expiring URL for a file. I have boto installed but I don't see how to get a file from a streaming distribution.
Are there are any applications or scripts that can take the difficulty of setting this garb up? I use Ubuntu (Linux) but I have XP in a VM if it's Windows-only. I've already looked at CloudBerry S3 Explorer Pro - but it makes about as much sense as the online UI.
You're right, it takes a lot of API work to get this set up. I hope they get it in the AWS Console soon!
UPDATE: I have submitted this code to boto - as of boto v2.1 (released 2011-10-27) this gets much easier. For boto < 2.1, use the instructions here. For boto 2.1 or greater, get the updated instructions on my blog: http://www.secretmike.com/2011/10/aws-cloudfront-secure-streaming.html Once boto v2.1 gets packaged by more distros I'll update the answer here.
To accomplish what you want you need to perform the following steps which I will detail below:
Create your s3 bucket and upload some objects (you've already done this)
Create a Cloudfront "Origin Access Identity" (basically an AWS account to allow cloudfront to access your s3 bucket)
Modify the ACLs on your objects so that only your Cloudfront Origin Access Identity is allowed to read them (this prevents people from bypassing Cloudfront and going direct to s3)
Create a cloudfront distribution with basic URLs and one which requires signed URLs
Test that you can download objects from basic cloudfront distribution but not from s3 or the signed cloudfront distribution
Create a key pair for signing URLs
Generate some URLs using Python
Test that the signed URLs work
1 - Create Bucket and upload object
The easiest way to do this is through the AWS Console, but for completeness I'll show how to do it with boto. The boto code is shown here:
import boto
#credentials stored in environment AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
s3 = boto.connect_s3()
#bucket name MUST follow dns guidelines
new_bucket_name = "stream.example.com"
bucket = s3.create_bucket(new_bucket_name)
object_name = "video.mp4"
key = bucket.new_key(object_name)
key.set_contents_from_filename(object_name)
2 - Create a Cloudfront "Origin Access Identity"
For now, this step can only be performed using the API. Boto code is here:
import boto
#credentials stored in environment AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
cf = boto.connect_cloudfront()
oai = cf.create_origin_access_identity(comment='New identity for secure videos')
#We need the following two values for later steps:
print("Origin Access Identity ID: %s" % oai.id)
print("Origin Access Identity S3CanonicalUserId: %s" % oai.s3_user_id)
3 - Modify the ACLs on your objects
Now that we've got our special S3 user account (the S3CanonicalUserId we created above) we need to give it access to our s3 objects. We can do this easily using the AWS Console by opening the object's (not the bucket's!) Permissions tab, clicking the "Add more permissions" button, and pasting the very long S3CanonicalUserId we got above into the "Grantee" field of a new permission entry. Make sure you give the new permission "Open/Download" rights.
You can also do this in code using the following boto script:
import boto
#credentials stored in environment AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
s3 = boto.connect_s3()
bucket_name = "stream.example.com"
bucket = s3.get_bucket(bucket_name)
object_name = "video.mp4"
key = bucket.get_key(object_name)
#Now add read permission to our new s3 account
s3_canonical_user_id = "<your S3CanonicalUserID from above>"
key.add_user_grant("READ", s3_canonical_user_id)
4 - Create a cloudfront distribution
Note that custom origins and private distributions are not fully supported in boto until version 2.0 which has not been formally released at time of writing. The code below pulls out some code from the boto 2.0 branch and hacks it together to get it going but it's not pretty. The 2.0 branch handles this much more elegantly - definitely use that if possible!
import boto
from boto.cloudfront.distribution import DistributionConfig
from boto.cloudfront.exception import CloudFrontServerError
import re

def get_domain_from_xml(xml):
    results = re.findall("<DomainName>([^<]+)</DomainName>", xml)
    return results[0]

#custom class to hack this until boto v2.0 is released
class HackedStreamingDistributionConfig(DistributionConfig):

    def __init__(self, connection=None, origin='', enabled=False,
                 caller_reference='', cnames=None, comment='',
                 trusted_signers=None):
        DistributionConfig.__init__(self, connection=connection,
                                    origin=origin, enabled=enabled,
                                    caller_reference=caller_reference,
                                    cnames=cnames, comment=comment,
                                    trusted_signers=trusted_signers)

    #override the to_xml() function
    def to_xml(self):
        s = '<?xml version="1.0" encoding="UTF-8"?>\n'
        s += '<StreamingDistributionConfig xmlns="http://cloudfront.amazonaws.com/doc/2010-07-15/">\n'
        s += ' <S3Origin>\n'
        s += ' <DNSName>%s</DNSName>\n' % self.origin
        if self.origin_access_identity:
            val = self.origin_access_identity
            s += ' <OriginAccessIdentity>origin-access-identity/cloudfront/%s</OriginAccessIdentity>\n' % val
        s += ' </S3Origin>\n'
        s += ' <CallerReference>%s</CallerReference>\n' % self.caller_reference
        for cname in self.cnames:
            s += ' <CNAME>%s</CNAME>\n' % cname
        if self.comment:
            s += ' <Comment>%s</Comment>\n' % self.comment
        s += ' <Enabled>'
        if self.enabled:
            s += 'true'
        else:
            s += 'false'
        s += '</Enabled>\n'
        if self.trusted_signers:
            s += '<TrustedSigners>\n'
            for signer in self.trusted_signers:
                if signer == 'Self':
                    s += ' <Self/>\n'
                else:
                    s += ' <AwsAccountNumber>%s</AwsAccountNumber>\n' % signer
            s += '</TrustedSigners>\n'
        if self.logging:
            s += '<Logging>\n'
            s += ' <Bucket>%s</Bucket>\n' % self.logging.bucket
            s += ' <Prefix>%s</Prefix>\n' % self.logging.prefix
            s += '</Logging>\n'
        s += '</StreamingDistributionConfig>\n'
        return s

    def create(self):
        response = self.connection.make_request('POST',
            '/%s/%s' % ("2010-11-01", "streaming-distribution"),
            {'Content-Type': 'text/xml'},
            data=self.to_xml())
        body = response.read()
        if response.status == 201:
            return body
        else:
            raise CloudFrontServerError(response.status, response.reason, body)

cf = boto.connect_cloudfront()

s3_dns_name = "stream.example.com.s3.amazonaws.com"
comment = "example streaming distribution"
oai = "<OAI ID from step 2 above like E23KRHS6GDUF5L>"

#Create a distribution that does NOT need signed URLS
hsd = HackedStreamingDistributionConfig(connection=cf, origin=s3_dns_name, comment=comment, enabled=True)
hsd.origin_access_identity = oai
basic_dist = hsd.create()
print("Distribution with basic URLs: %s" % get_domain_from_xml(basic_dist))

#Create a distribution that DOES need signed URLS
hsd = HackedStreamingDistributionConfig(connection=cf, origin=s3_dns_name, comment=comment, enabled=True)
hsd.origin_access_identity = oai
#Add some required signers (Self means your own account)
hsd.trusted_signers = ['Self']
signed_dist = hsd.create()
print("Distribution with signed URLs: %s" % get_domain_from_xml(signed_dist))
5 - Test that you can download objects from cloudfront but not from s3
You should now be able to verify:
stream.example.com.s3.amazonaws.com/video.mp4 - should give AccessDenied
signed_distribution.cloudfront.net/video.mp4 - should give MissingKey (because the URL is not signed)
basic_distribution.cloudfront.net/video.mp4 - should work fine
The tests will have to be adjusted to work with your stream player, but the basic idea is that only the basic cloudfront url should work.
6 - Create a keypair for CloudFront
I think the only way to do this is through Amazon's web site. Go into your AWS "Account" page and click on the "Security Credentials" link. Click on the "Key Pairs" tab then click "Create a New Key Pair". This will generate a new key pair for you and automatically download a private key file (pk-xxxxxxxxx.pem). Keep the key file safe and private. Also note down the "Key Pair ID" from amazon as we will need it in the next step.
7 - Generate some URLs in Python
As of boto version 2.0 there does not seem to be any support for generating signed CloudFront URLs. Python does not include RSA encryption routines in the standard library so we will have to use an additional library. I've used M2Crypto in this example.
For a non-streaming distribution, you must use the full cloudfront URL as the resource, however for streaming we only use the object name of the video file. See the code below for a full example of generating a URL which only lasts for 5 minutes.
This code is based loosely on the PHP example code provided by Amazon in the CloudFront documentation.
from M2Crypto import EVP
import base64
import time

def aws_url_base64_encode(msg):
    msg_base64 = base64.b64encode(msg)
    msg_base64 = msg_base64.replace('+', '-')
    msg_base64 = msg_base64.replace('=', '_')
    msg_base64 = msg_base64.replace('/', '~')
    return msg_base64

def sign_string(message, priv_key_string):
    key = EVP.load_key_string(priv_key_string)
    key.reset_context(md='sha1')
    key.sign_init()
    key.sign_update(str(message))
    signature = key.sign_final()
    return signature

def create_url(url, encoded_signature, key_pair_id, expires):
    signed_url = "%(url)s?Expires=%(expires)s&Signature=%(encoded_signature)s&Key-Pair-Id=%(key_pair_id)s" % {
        'url': url,
        'expires': expires,
        'encoded_signature': encoded_signature,
        'key_pair_id': key_pair_id,
    }
    return signed_url

def get_canned_policy_url(url, priv_key_string, key_pair_id, expires):
    #we manually construct this policy string to ensure formatting matches signature
    canned_policy = '{"Statement":[{"Resource":"%(url)s","Condition":{"DateLessThan":{"AWS:EpochTime":%(expires)s}}}]}' % {'url': url, 'expires': expires}
    #now base64 encode it (must be URL safe)
    encoded_policy = aws_url_base64_encode(canned_policy)
    #sign the non-encoded policy
    signature = sign_string(canned_policy, priv_key_string)
    #now base64 encode the signature (URL safe as well)
    encoded_signature = aws_url_base64_encode(signature)
    #combine these into a full url
    signed_url = create_url(url, encoded_signature, key_pair_id, expires)
    return signed_url

def encode_query_param(resource):
    enc = resource
    enc = enc.replace('?', '%3F')
    enc = enc.replace('=', '%3D')
    enc = enc.replace('&', '%26')
    return enc

#Set parameters for URL
key_pair_id = "APKAIAZCZRKVIO4BQ"  #from the AWS accounts page
priv_key_file = "cloudfront-pk.pem"  #your private keypair file
resource = 'video.mp4'  #your resource (just object name for streaming videos)
expires = int(time.time()) + 300  #5 min

#Create the signed URL
priv_key_string = open(priv_key_file).read()
signed_url = get_canned_policy_url(resource, priv_key_string, key_pair_id, expires)

#Flash player doesn't like query params so encode them
enc_url = encode_query_param(signed_url)
print(enc_url)
8 - Try out the URLs
Hopefully you should now have a working URL which looks something like this:
video.mp4%3FExpires%3D1309979985%26Signature%3DMUNF7pw1689FhMeSN6JzQmWNVxcaIE9mk1x~KOudJky7anTuX0oAgL~1GW-ON6Zh5NFLBoocX3fUhmC9FusAHtJUzWyJVZLzYT9iLyoyfWMsm2ylCDBqpy5IynFbi8CUajd~CjYdxZBWpxTsPO3yIFNJI~R2AFpWx8qp3fs38Yw_%26Key-Pair-Id%3DAPKAIAZRKVIO4BQ
Put this into your js and you should have something which looks like this (from the PHP example in Amazon's CloudFront documentation):
var so_canned = new SWFObject('http://location.domname.com/~jvngkhow/player.swf','mpl','640','360','9');
so_canned.addParam('allowfullscreen','true');
so_canned.addParam('allowscriptaccess','always');
so_canned.addParam('wmode','opaque');
so_canned.addVariable('file','video.mp4%3FExpires%3D1309979985%26Signature%3DMUNF7pw1689FhMeSN6JzQmWNVxcaIE9mk1x~KOudJky7anTuX0oAgL~1GW-ON6Zh5NFLBoocX3fUhmC9FusAHtJUzWyJVZLzYT9iLyoyfWMsm2ylCDBqpy5IynFbi8CUajd~CjYdxZBWpxTsPO3yIFNJI~R2AFpWx8qp3fs38Yw_%26Key-Pair-Id%3DAPKAIAZRKVIO4BQ');
so_canned.addVariable('streamer','rtmp://s3nzpoyjpct.cloudfront.net/cfx/st');
so_canned.write('canned');
Summary
As you can see, not very easy! boto v2 will help a lot setting up the distribution. I will find out if it's possible to get some URL generation code in there as well to improve this great library!
In Python, what's the easiest way of generating an expiring URL for a file. I have boto installed but I don't see how to get a file from a streaming distribution.
You can generate an expiring signed URL for the resource. The Boto3 documentation has a nice example solution for that:
import datetime

from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import padding
from botocore.signers import CloudFrontSigner

def rsa_signer(message):
    with open('path/to/key.pem', 'rb') as key_file:
        private_key = serialization.load_pem_private_key(
            key_file.read(),
            password=None,
            backend=default_backend()
        )
    signer = private_key.signer(padding.PKCS1v15(), hashes.SHA1())
    signer.update(message)
    return signer.finalize()

key_id = 'AKIAIOSFODNN7EXAMPLE'
url = 'http://d2949o5mkkp72v.cloudfront.net/hello.txt'
expire_date = datetime.datetime(2017, 1, 1)

cloudfront_signer = CloudFrontSigner(key_id, rsa_signer)

# Create a signed url that will be valid until the specific expiry date
# provided using a canned policy.
signed_url = cloudfront_signer.generate_presigned_url(
    url, date_less_than=expire_date)
print(signed_url)