Boto3 not copying snapshot to other regions, other options? - python

[Very new to AWS]
Hi,
I am trying to move my EBS volume snapshot copies across regions. I have been trying to use Boto3 to move the snapshots. My objective is to move the latest snapshot from us-east-2 region to us-east-1 region automatically on a daily basis.
I have used aws configure command in terminal to setup my security credentials and set region to us-east-2.
I am using pandas to acquire the most recent snapshot-id using this code:
import boto3
import pandas as pd
from pandas.io.json.normalize import nested_to_record
import boto.ec2
client = boto3.client('ec2')
aws_api_response = client.describe_snapshots(OwnerIds=['self'])
flat = nested_to_record(aws_api_response)
df = pd.DataFrame.from_dict(flat)
df= df['Snapshots'].apply(pd.Series)
insert_snap = df.loc[df['StartTime'] == max(df['StartTime']),'SnapshotId']
insert_snap = insert_snap.reset_index(drop=True)
insert_snap returns a snapshot id something like snap-1234ABCD
I am trying to use this code to move the snapshot from us-east-2 to us-east-1:
client.copy_snapshot(SourceSnapshotId='%s' % insert_snap[0],
                     SourceRegion='us-east-2',
                     DestinationRegion='us-east-1',
                     Description='This is my copied snapshot.')
With the above call, the snapshot is copied within the same region (us-east-2).
I have also tried switching regions with the aws configure command in the terminal, but the same issue occurs: the snapshot is still copied within the same region.
There appears to be a Boto3 issue where copy_snapshot() ignores the DestinationRegion parameter. Information found here: https://github.com/boto/boto3/issues/886
I have tried inserting this code into the Lambda console but keep getting the error "errorMessage": "Unable to import module 'lambda_function'":
import boto3

region = 'us-east-2'
ec = boto3.client('ec2', region_name=region)

def lambda_handler(event, context):
    response = ec.copy_snapshot(SourceSnapshotId='snap-xxx',
                                SourceRegion=region,
                                DestinationRegion='us-east-1',
                                Description='copied from Ohio')
    print(response)
I am out of options. What can I do to automate the transfer of snapshots in AWS?

As per CopySnapshot - Amazon Elastic Compute Cloud:
CopySnapshot sends the snapshot copy to the regional endpoint that you send the HTTP request to, such as ec2.us-east-1.amazonaws.com (in the AWS CLI, this is specified with the --region parameter or the default region in your AWS configuration file).
Therefore, you should send the copy_snapshot() command to us-east-1, with the Source Region set to us-east-2.
If you wish to move the most recent snapshot, you could run:
import boto3
SOURCE_REGION = 'us-east-2'
DESTINATION_REGION = 'us-east-1'
# Connect to EC2 in Source region
source_client = boto3.client('ec2', region_name=SOURCE_REGION)
# Get a list of all snapshots, then sort them
snapshots = source_client.describe_snapshots(OwnerIds=['self'])
snapshots_sorted = sorted([(s['SnapshotId'], s['StartTime']) for s in snapshots['Snapshots']], key=lambda k: k[1])
latest_snapshot = snapshots_sorted[-1][0]
print ('Latest Snapshot ID is ' + latest_snapshot)
# Connect to EC2 in Destination region
destination_client = boto3.client('ec2', region_name=DESTINATION_REGION)
# Copy the snapshot
response = destination_client.copy_snapshot(
    SourceSnapshotId=latest_snapshot,
    SourceRegion=SOURCE_REGION,
    Description='This is my copied snapshot'
)
print('Copied Snapshot ID is ' + response['SnapshotId'])
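To run this copy automatically every day, one option (a sketch, not a tested deployment) is to put the same logic in a Lambda function triggered by a scheduled CloudWatch Events/EventBridge rule. The "Unable to import module 'lambda_function'" error usually means the handler file isn't named lambda_function.py, or the configured handler (e.g. lambda_function.lambda_handler) doesn't match it.
import boto3

SOURCE_REGION = 'us-east-2'
DESTINATION_REGION = 'us-east-1'

def lambda_handler(event, context):
    # Find the most recent snapshot in the source region
    source_client = boto3.client('ec2', region_name=SOURCE_REGION)
    snapshots = source_client.describe_snapshots(OwnerIds=['self'])['Snapshots']
    latest = max(snapshots, key=lambda s: s['StartTime'])

    # Issue the copy request against the destination region
    destination_client = boto3.client('ec2', region_name=DESTINATION_REGION)
    response = destination_client.copy_snapshot(
        SourceSnapshotId=latest['SnapshotId'],
        SourceRegion=SOURCE_REGION,
        Description='Daily copy of ' + latest['SnapshotId']
    )
    return response['SnapshotId']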

Related

S3 boto3 refuses to overwrite endpoint URL

I'm working on an internal S3 service (not an AWS one). When I provide hard-coded credentials, region, and endpoint_url, boto3 seems to ignore them.
I came to that conclusion because it attempts to go out to the internet (using a public AWS endpoint URL instead of the internal one I provided), which then fails with the proxy error below. But it should not go out to the internet at all, since it is an internal S3 service:
botocore.exceptions.ProxyConnectionError: Failed to connect to proxy URL: "http://my_company_proxy"
Here is my code
import io
import os
import boto3
import pandas as pd
# Method 1 : Client #########################################
s3_client = boto3.client(
    's3',
    region_name='EU-WEST-1',
    aws_access_key_id='xxx',
    aws_secret_access_key='zzz',
    endpoint_url='https://my_company_enpoint_url'
)
# ==> at this point no error, but I don't know the value of endpoint_url
# Read bucket
bucket = "bkt-udt-arch"
file_name = "banking.csv"
print("debug 1") # printed OK
obj = s3_client.get_object(Bucket=bucket, Key=file_name)
# program stops here :
botocore.exceptions.ProxyConnectionError: Failed to connect to proxy URL: "http://my_company_proxy"
print("debug 2") # not printed -
initial_df = pd.read_csv(obj['Body']) # 'Body' is a key word
print("debug 3")
# Method 2 : Resource #########################################
# use third party object storage
s3 = boto3.resource(
    's3',
    endpoint_url='https://my_company_enpoint_url',
    aws_access_key_id='xxx',
    aws_secret_access_key='zzz',
    region_name='EU-WEST-1'
)
print("debug 4") # Printed OK if method 1 is commented
# Print out bucket names
for bucket in s3.buckets.all():
    print(bucket.name)
Thank you for the review.
It was indeed a proxy problem: when the http_proxy environment variable is disabled, it works fine.
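If unsetting the proxy globally isn't an option, a minimal sketch of handling it from the script itself (assuming botocore picks up these standard proxy environment variables, and using the same placeholder endpoint as above):
import os
import boto3

# Clear the corporate proxy for this process only...
for var in ('http_proxy', 'https_proxy', 'HTTP_PROXY', 'HTTPS_PROXY'):
    os.environ.pop(var, None)
# ...or keep the proxy but exempt the internal host (placeholder hostname):
# os.environ['NO_PROXY'] = 'my_company_enpoint_url'

s3_client = boto3.client(
    's3',
    region_name='EU-WEST-1',
    aws_access_key_id='xxx',
    aws_secret_access_key='zzz',
    endpoint_url='https://my_company_enpoint_url'
)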

Azure: create storage account with container and upload blob to it in Python

I'm trying to create a storage account in Azure and upload a blob into it using their python SDK.
I managed to create an account like this:
client = get_client_from_auth_file(StorageManagementClient)
storage_account = client.storage_accounts.create(
    resourceGroup,
    name,
    StorageAccountCreateParameters(
        sku=Sku(name=SkuName.standard_ragrs),
        enable_https_traffic_only=True,
        kind=Kind.storage,
        location=region)).result()
The problem is that later I'm trying to build a container and I don't know what to insert as "account_url"
I have tried doing:
client = get_client_from_auth_file(BlobServiceClient, account_url=storage_account.primary_endpoints.blob)
return client.create_container(name)
But I'm getting:
azure.core.exceptions.ResourceNotFoundError: The specified resource does not exist
I did manage to create a container using:
client = get_client_from_auth_file(StorageManagementClient)
return client.blob_containers.create(
    resourceGroup,
    storage_account.name,
    name,
    BlobContainer(),
    public_access=PublicAccess.Container
)
But later, when I try to upload a blob using BlobServiceClient or BlobClient, I still need the "account_url", so I'm still getting an error:
azure.core.exceptions.ResourceNotFoundError: The specified resource does not exist
Anyone can help me to understand how do I get the account_url for a storage account I created with the SDK?
EDIT:
I managed to find a workaround to the problem by creating the connection string from the storage keys.
storage_client = get_client_from_auth_file(StorageManagementClient)
storage_keys = storage_client.storage_accounts.list_keys(resource_group, account_name)
storage_key = next(v.value for v in storage_keys.keys)
return BlobServiceClient.from_connection_string(
    'DefaultEndpointsProtocol=https;' +
    f'AccountName={account_name};' +
    f'AccountKey={storage_key};' +
    'EndpointSuffix=core.windows.net')
This works, but I think George Chen's answer is more elegant.
I could reproduce this problem. I found that get_client_from_auth_file does not pass the credential on to the BlobServiceClient; if you create a BlobServiceClient with just the account_url and no credential, it can still print the account name, which is why the failure only shows up later.
So if you want to use a credential to get a BlobServiceClient, you could use the code below, then do the other operations.
credentials = ClientSecretCredential(
    'tenant_id',
    'application_id',
    'application_secret'
)
blobserviceclient = BlobServiceClient(account_url=storage_account.primary_endpoints.blob, credential=credentials)
If you don't want this way, you could create the BlobServiceClient with the account key.
client = get_client_from_auth_file(StorageManagementClient, auth_path='auth')
storage_account = client.storage_accounts.create(
    'group name',
    'account name',
    StorageAccountCreateParameters(
        sku=Sku(name=SkuName.standard_ragrs),
        enable_https_traffic_only=True,
        kind=Kind.storage,
        location='eastus',)).result()
storage_keys = client.storage_accounts.list_keys(resource_group_name='group name', account_name='account name')
storage_keys = {v.key_name: v.value for v in storage_keys.keys}
blobserviceclient = BlobServiceClient(account_url=storage_account.primary_endpoints.blob, credential=storage_keys['key1'])
blobserviceclient.create_container(name='container name')
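Once you have a working BlobServiceClient (by either method), uploading a blob is a short step further. A minimal sketch, with the container name, blob name, and local file path as placeholders:
container_client = blobserviceclient.get_container_client('container name')
with open('local_file.bin', 'rb') as data:
    container_client.upload_blob(name='blob name', data=data)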

Boto3: Empty Datapoint from AWS instance

I wanted to write a Python script that fetches my EC2 CPU utilization. When I use the get_metric_statistics() method, the response comes back with an empty Datapoints list.
As far as I know, the 'Datapoints': [] array shouldn't be empty. It should return something for the CPU load %. My code is:
import boto3
import sys
import datetime
client = boto3.client('cloudwatch')
response = client.get_metric_statistics(
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[
        {
            'Name': 'InstanceId',
            'Value': 'i-***********'
        },
    ],
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(seconds=600),
    EndTime=datetime.datetime.utcnow(),
    Period=120,
    Statistics=[
        'Average',
    ],
    Unit='Percent'
)
print(response)
Any help on what is wrong here? Thanks!
To use percentile statistics you must first enable detailed monitoring.
Amazon EC2 Metrics and Dimensions
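If monitoring granularity is the cause, detailed (1-minute) monitoring can also be enabled per instance from boto3. A minimal sketch, with the instance ID as a placeholder:
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')     # use the region your instance is in
ec2.monitor_instances(InstanceIds=['i-***********'])   # switches the instance to detailed monitoring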
By default, boto3 uses the default region from .aws/config (in my case "ap-southeast-1") and the access key & secret key from .aws/credentials that were set when the CLI was configured. If that region doesn't match the region the instance is in, the Datapoints array comes back empty.
Solution:
1. Create a new profile for every region and configure the region in .aws/config. Mine looks like this:
[default]
region = ap-southeast-1
[profile nv]
region = us-east-1
[profile prod]
region = us-east-1
nv and prod are user-made profile names.
2. Pass your profile name instead of default:
session = boto3.Session(profile_name='default')
3. Enter the secret key & access key in the same way in .aws/credentials.
You can use aws configure --profile your_profile_name to create the profiles in the config & credentials files.
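Putting that together, here is a minimal sketch (the 'nv' profile and instance ID are placeholders from the example above) that queries CloudWatch in the region where the instance actually lives:
import boto3
import datetime

session = boto3.Session(profile_name='nv')   # profile whose region matches the instance
cloudwatch = session.client('cloudwatch')    # or boto3.client('cloudwatch', region_name='us-east-1')

response = cloudwatch.get_metric_statistics(
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-***********'}],
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(seconds=600),
    EndTime=datetime.datetime.utcnow(),
    Period=300,                              # 5-minute period matches basic monitoring
    Statistics=['Average'],
    Unit='Percent'
)
print(response['Datapoints'])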

Docker Python API - Tagging Containers

I'm using Docker with AWS ECR repo. One of the steps they instruct you to do is to run "docker tag" to tag a built image with a tag that includes a "fully-ish qualified" location of where the image is going to be stored in ECR.
I was working on migrating a script I had to Python API (instead of doing shell calls to the docker client). I'm unable to find the equivalent of "docker tag" in the API docs at https://docker-py.readthedocs.io/en/stable/images.html.
Can somebody point me in the right direction?
For those of you using ECR/ECS in AWS, here is an example of how you go about this.
Amazon provides instructions like this in ECR to push your images:
aws ecr get-login --no-include-email --region us-west-2
docker build -t myproj .
docker tag calclab:latest XXXXXXXXXX.dkr.ecr.us-west-2.amazonaws.com/myproj:latest
docker push XXXXXXXXXX.dkr.ecr.us-west-2.amazonaws.com/myproj:latest
Here is the rough equivalent using the Docker Python API and Boto3 (AWS's Python library). It includes tagging the image twice in ECR, so that I can track each image's version number while tracking what the latest is (so my ECS Task can, by default, always grab the most current image).
import docker
import boto3
import json
import sys
from base64 import b64decode

def ecrDemo(version_number):
    # The ECR Repository URI
    repo = "XXXXXXXXXX.dkr.ecr.us-west-2.amazonaws.com/myproj"
    # The name of the [profile] stored in .aws/credentials
    profile = "sandbox"
    # The region your ECR repo is in
    region = "us-west-2"
    # How you want to tag your project locally
    local_tag = "myproj"

    # Set up a session
    session = boto3.Session(profile_name=profile, region_name=region)
    ecr = session.client('ecr')
    docker_api = docker.APIClient()

    print("Building image " + local_tag)
    for line in docker_api.build(path='.', tag=local_tag, stream=True,
                                 dockerfile='./Dockerfile.myproj'):
        process_docker_api_line(line)

    # Make auth call and parse out results
    auth = ecr.get_authorization_token()
    token = auth["authorizationData"][0]["authorizationToken"]
    username, password = b64decode(token).decode('utf-8').split(':')
    endpoint = auth["authorizationData"][0]["proxyEndpoint"]

    # print("Make authentication call")
    # docker_api.login(username=username, password=password,
    #                  registry=endpoint, reauth=True)
    auth_config_payload = {'username': username, 'password': password}

    version_tag = repo + ':' + version_number
    latest_tag = repo + ':latest'

    print("Tagging version " + version_tag)
    if docker_api.tag(local_tag, version_tag) is False:
        raise RuntimeError("Tag appeared to fail: " + version_tag)

    print("Tagging latest " + latest_tag)
    if docker_api.tag(local_tag, latest_tag) is False:
        raise RuntimeError("Tag appeared to fail: " + latest_tag)

    print("Pushing to repo " + version_tag)
    for line in docker_api.push(version_tag, stream=True, auth_config=auth_config_payload):
        process_docker_api_line(line)

    print("Pushing to repo " + latest_tag)
    for line in docker_api.push(latest_tag, stream=True, auth_config=auth_config_payload):
        process_docker_api_line(line)

    print("Removing tagged deployment images")
    # You will still have the local_tag image if you need to troubleshoot
    docker_api.remove_image(version_tag, force=True)
    docker_api.remove_image(latest_tag, force=True)

def process_docker_api_line(payload):
    """ Process the output from the API stream, throw an Exception if there is an error """
    # Sometimes Docker sends two "{}\n" blocks together...
    for segment in payload.decode('utf-8').split('\n'):
        line = segment.strip()
        if line:
            try:
                line_payload = json.loads(line)
            except ValueError as ex:
                print("Could not decipher payload from API: " + str(ex))
                continue
            if line_payload:
                if "errorDetail" in line_payload:
                    error = line_payload["errorDetail"]
                    sys.stderr.write(error["message"])
                    raise RuntimeError("Error on build - code " + str(error["code"]))
                elif "stream" in line_payload:
                    sys.stdout.write(line_payload["stream"])
These are the steps you can use to build and tag an image.
import docker
tag = 'latest' # or whatever you want
client = docker.from_env()
# identifier of the image on your system
dockername = "%s:%s" % (<name of the image on your system>, <tag>)
# the target identifier
target = "%s:%d/%s" % (<registry address>, <registry_port>, <id or name of the image>)
# the url is usually unix://var/run/docker.sock' but depends on your environment
cli = docker.APIClient(base_url="<the daemon\'s url or socket>")
# build the image
cli.build(path=..., tag=dockername, pull=..., buildargs=...)
# tag it
image = client.images.get(dockername)
image.tag(target, tag=tag)
This answer helped me a great deal, thank you!
When I was developing my flow, I was confused by the terminology on the docker-py page, which I think would benefit by more examples.
I was not sure during development if I was correctly building, or if I was having issues with authentication or authorization.
I needed to carefully monitor my build results using the Docker CLI, and capture and analyze the output from the different build, tag and push functions.
Another caveat with getting output from these functions is a warning that is clearly stated for the docker-py pull() function but not the others: if you request that the function provide a generator for output from the operation you must consume that generator. I was able to get my flow working with a debug level of verbosity.
Unfortunately, when I toggled off the verbosity in my code and did not consume the generators for build() and push() (tag() only has a boolean result), my flow only appeared to work: it was not throwing errors, but it also was not building or pushing the code! It is better to either not turn on the streaming output if you're not in debug mode, or leave it on and use deque() to consume the output without processing it.
To summarize the differences in how tags are used:
build() takes a 'local tag', which is just the name of the build, e.g. 'myproj'
tag() applies a 'version tag' to a 'local tag' you just built with build(), the version tag will include the registry and a version label, e.g., myregistry.mydomain.com/myname/myproj:latest
push() will push the image in a 'version tag' to the registry designated in the version tag. So in this case, the image you tagged as myregistry.mydomain.com/myname/myproj:latest will be pushed to the registry myregistry.mydomain.com.
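To illustrate the last point about consuming the generators, here is a minimal sketch of the deque() trick, mirroring the build()/push() calls used in the answers above (the image and registry names are placeholders):
from collections import deque

import docker

api = docker.APIClient()

# Consume the streamed output without keeping or printing it.
# If the generators from build()/push() are never consumed,
# the operations may appear to succeed without actually completing.
deque(api.build(path='.', tag='myproj', stream=True), maxlen=0)
deque(api.push('myregistry.mydomain.com/myname/myproj', tag='latest', stream=True), maxlen=0)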

Getting started with secure AWS CloudFront streaming with Python

I have created a S3 bucket, uploaded a video, created a streaming distribution in CloudFront. Tested it with a static HTML player and it works. I have created a keypair through the account settings. I have the private key file sitting on my desktop at the moment. That's where I am.
My aim is to get to a point where my Django/Python site creates secure URLs and people can't access the videos unless they've come from one of my pages. The problem is I'm allergic to the way Amazon have laid things out and I'm just getting more and more confused.
I realise this isn't going to be the best question on StackOverflow but I'm certain I can't be the only fool out here that can't make heads or tails out of how to set up a secure CloudFront/S3 situation. I would really appreciate your help and am willing (once two days has passed) give a 500pt bounty to the best answer.
I have several questions that, once answered, should fit into one explanation of how to accomplish what I'm after:
In the documentation (there's an example in the next point) there's lots of XML lying around telling me I need to POST things to various places. Is there an online console for doing this? Or do I literally have to force this up via cURL (et al)?
How do I create a Origin Access Identity for CloudFront and bind it to my distribution? I've read this document but, per the first point, don't know what to do with it. How does my keypair fit into this?
Once that's done, how do I limit the S3 bucket to only allow people to download things through that identity? If this is another XML jobby rather than clicking around the web UI, please tell me where and how I'm supposed to get this into my account.
In Python, what's the easiest way of generating an expiring URL for a file. I have boto installed but I don't see how to get a file from a streaming distribution.
Are there are any applications or scripts that can take the difficulty of setting this garb up? I use Ubuntu (Linux) but I have XP in a VM if it's Windows-only. I've already looked at CloudBerry S3 Explorer Pro - but it makes about as much sense as the online UI.
You're right, it takes a lot of API work to get this set up. I hope they get it in the AWS Console soon!
UPDATE: I have submitted this code to boto - as of boto v2.1 (released 2011-10-27) this gets much easier. For boto < 2.1, use the instructions here. For boto 2.1 or greater, get the updated instructions on my blog: http://www.secretmike.com/2011/10/aws-cloudfront-secure-streaming.html Once boto v2.1 gets packaged by more distros I'll update the answer here.
To accomplish what you want you need to perform the following steps which I will detail below:
Create your s3 bucket and upload some objects (you've already done this)
Create a Cloudfront "Origin Access Identity" (basically an AWS account to allow cloudfront to access your s3 bucket)
Modify the ACLs on your objects so that only your Cloudfront Origin Access Identity is allowed to read them (this prevents people from bypassing Cloudfront and going direct to s3)
Create a cloudfront distribution with basic URLs and one which requires signed URLs
Test that you can download objects from basic cloudfront distribution but not from s3 or the signed cloudfront distribution
Create a key pair for signing URLs
Generate some URLs using Python
Test that the signed URLs work
1 - Create Bucket and upload object
The easiest way to do this is through the AWS Console but for completeness I'll show how using boto. Boto code is shown here:
import boto
#credentials stored in environment AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
s3 = boto.connect_s3()
#bucket name MUST follow dns guidelines
new_bucket_name = "stream.example.com"
bucket = s3.create_bucket(new_bucket_name)
object_name = "video.mp4"
key = bucket.new_key(object_name)
key.set_contents_from_filename(object_name)
2 - Create a Cloudfront "Origin Access Identity"
For now, this step can only be performed using the API. Boto code is here:
import boto
#credentials stored in environment AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
cf = boto.connect_cloudfront()
oai = cf.create_origin_access_identity(comment='New identity for secure videos')
#We need the following two values for later steps:
print("Origin Access Identity ID: %s" % oai.id)
print("Origin Access Identity S3CanonicalUserId: %s" % oai.s3_user_id)
3 - Modify the ACLs on your objects
Now that we've got our special S3 user account (the S3CanonicalUserId we created above) we need to give it access to our s3 objects. We can do this easily using the AWS Console by opening the object's (not the bucket's!) Permissions tab, clicking the "Add more permissions" button, and pasting the very long S3CanonicalUserId we got above into the "Grantee" field of a new permission entry. Make sure you give the new permission "Open/Download" rights.
You can also do this in code using the following boto script:
import boto
#credentials stored in environment AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
s3 = boto.connect_s3()
bucket_name = "stream.example.com"
bucket = s3.get_bucket(bucket_name)
object_name = "video.mp4"
key = bucket.get_key(object_name)
#Now add read permission to our new s3 account
s3_canonical_user_id = "<your S3CanonicalUserID from above>"
key.add_user_grant("READ", s3_canonical_user_id)
4 - Create a cloudfront distribution
Note that custom origins and private distributions are not fully supported in boto until version 2.0 which has not been formally released at time of writing. The code below pulls out some code from the boto 2.0 branch and hacks it together to get it going but it's not pretty. The 2.0 branch handles this much more elegantly - definitely use that if possible!
import boto
from boto.cloudfront.distribution import DistributionConfig
from boto.cloudfront.exception import CloudFrontServerError
import re

def get_domain_from_xml(xml):
    results = re.findall("<DomainName>([^<]+)</DomainName>", xml)
    return results[0]

#custom class to hack this until boto v2.0 is released
class HackedStreamingDistributionConfig(DistributionConfig):

    def __init__(self, connection=None, origin='', enabled=False,
                 caller_reference='', cnames=None, comment='',
                 trusted_signers=None):
        DistributionConfig.__init__(self, connection=connection,
                                    origin=origin, enabled=enabled,
                                    caller_reference=caller_reference,
                                    cnames=cnames, comment=comment,
                                    trusted_signers=trusted_signers)

    #override the to_xml() function
    def to_xml(self):
        s = '<?xml version="1.0" encoding="UTF-8"?>\n'
        s += '<StreamingDistributionConfig xmlns="http://cloudfront.amazonaws.com/doc/2010-07-15/">\n'
        s += ' <S3Origin>\n'
        s += ' <DNSName>%s</DNSName>\n' % self.origin
        if self.origin_access_identity:
            val = self.origin_access_identity
            s += ' <OriginAccessIdentity>origin-access-identity/cloudfront/%s</OriginAccessIdentity>\n' % val
        s += ' </S3Origin>\n'
        s += ' <CallerReference>%s</CallerReference>\n' % self.caller_reference
        for cname in self.cnames:
            s += ' <CNAME>%s</CNAME>\n' % cname
        if self.comment:
            s += ' <Comment>%s</Comment>\n' % self.comment
        s += ' <Enabled>'
        if self.enabled:
            s += 'true'
        else:
            s += 'false'
        s += '</Enabled>\n'
        if self.trusted_signers:
            s += '<TrustedSigners>\n'
            for signer in self.trusted_signers:
                if signer == 'Self':
                    s += ' <Self/>\n'
                else:
                    s += ' <AwsAccountNumber>%s</AwsAccountNumber>\n' % signer
            s += '</TrustedSigners>\n'
        if self.logging:
            s += '<Logging>\n'
            s += ' <Bucket>%s</Bucket>\n' % self.logging.bucket
            s += ' <Prefix>%s</Prefix>\n' % self.logging.prefix
            s += '</Logging>\n'
        s += '</StreamingDistributionConfig>\n'
        return s

    def create(self):
        response = self.connection.make_request('POST',
                                                '/%s/%s' % ("2010-11-01", "streaming-distribution"),
                                                {'Content-Type': 'text/xml'},
                                                data=self.to_xml())
        body = response.read()
        if response.status == 201:
            return body
        else:
            raise CloudFrontServerError(response.status, response.reason, body)

cf = boto.connect_cloudfront()

s3_dns_name = "stream.example.com.s3.amazonaws.com"
comment = "example streaming distribution"
oai = "<OAI ID from step 2 above like E23KRHS6GDUF5L>"

#Create a distribution that does NOT need signed URLS
hsd = HackedStreamingDistributionConfig(connection=cf, origin=s3_dns_name, comment=comment, enabled=True)
hsd.origin_access_identity = oai
basic_dist = hsd.create()
print("Distribution with basic URLs: %s" % get_domain_from_xml(basic_dist))

#Create a distribution that DOES need signed URLS
hsd = HackedStreamingDistributionConfig(connection=cf, origin=s3_dns_name, comment=comment, enabled=True)
hsd.origin_access_identity = oai
#Add some required signers (Self means your own account)
hsd.trusted_signers = ['Self']
signed_dist = hsd.create()
print("Distribution with signed URLs: %s" % get_domain_from_xml(signed_dist))
5 - Test that you can download objects from cloudfront but not from s3
You should now be able to verify:
stream.example.com.s3.amazonaws.com/video.mp4 - should give AccessDenied
signed_distribution.cloudfront.net/video.mp4 - should give MissingKey (because the URL is not signed)
basic_distribution.cloudfront.net/video.mp4 - should work fine
The tests will have to be adjusted to work with your stream player, but the basic idea is that only the basic cloudfront url should work.
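For the S3 side of that check, a minimal sketch (using the example bucket and object names from step 1) that confirms direct S3 access is blocked:
import urllib.request
import urllib.error

url = 'https://stream.example.com.s3.amazonaws.com/video.mp4'
try:
    urllib.request.urlopen(url)
    print('Unexpected: direct S3 access succeeded')
except urllib.error.HTTPError as err:
    # With the ACLs from step 3 in place, this should report 403 AccessDenied
    print('Direct S3 access blocked as expected: HTTP %d' % err.code)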
6 - Create a keypair for CloudFront
I think the only way to do this is through Amazon's web site. Go into your AWS "Account" page and click on the "Security Credentials" link. Click on the "Key Pairs" tab then click "Create a New Key Pair". This will generate a new key pair for you and automatically download a private key file (pk-xxxxxxxxx.pem). Keep the key file safe and private. Also note down the "Key Pair ID" from amazon as we will need it in the next step.
7 - Generate some URLs in Python
As of boto version 2.0 there does not seem to be any support for generating signed CloudFront URLs. Python does not include RSA encryption routines in the standard library so we will have to use an additional library. I've used M2Crypto in this example.
For a non-streaming distribution, you must use the full cloudfront URL as the resource, however for streaming we only use the object name of the video file. See the code below for a full example of generating a URL which only lasts for 5 minutes.
This code is based loosely on the PHP example code provided by Amazon in the CloudFront documentation.
from M2Crypto import EVP
import base64
import time

def aws_url_base64_encode(msg):
    msg_base64 = base64.b64encode(msg)
    msg_base64 = msg_base64.replace('+', '-')
    msg_base64 = msg_base64.replace('=', '_')
    msg_base64 = msg_base64.replace('/', '~')
    return msg_base64

def sign_string(message, priv_key_string):
    key = EVP.load_key_string(priv_key_string)
    key.reset_context(md='sha1')
    key.sign_init()
    key.sign_update(str(message))
    signature = key.sign_final()
    return signature

def create_url(url, encoded_signature, key_pair_id, expires):
    signed_url = "%(url)s?Expires=%(expires)s&Signature=%(encoded_signature)s&Key-Pair-Id=%(key_pair_id)s" % {
        'url': url,
        'expires': expires,
        'encoded_signature': encoded_signature,
        'key_pair_id': key_pair_id,
    }
    return signed_url

def get_canned_policy_url(url, priv_key_string, key_pair_id, expires):
    #we manually construct this policy string to ensure formatting matches signature
    canned_policy = '{"Statement":[{"Resource":"%(url)s","Condition":{"DateLessThan":{"AWS:EpochTime":%(expires)s}}}]}' % {'url': url, 'expires': expires}
    #now base64 encode it (must be URL safe)
    encoded_policy = aws_url_base64_encode(canned_policy)
    #sign the non-encoded policy
    signature = sign_string(canned_policy, priv_key_string)
    #now base64 encode the signature (URL safe as well)
    encoded_signature = aws_url_base64_encode(signature)
    #combine these into a full url
    signed_url = create_url(url, encoded_signature, key_pair_id, expires)
    return signed_url

def encode_query_param(resource):
    enc = resource
    enc = enc.replace('?', '%3F')
    enc = enc.replace('=', '%3D')
    enc = enc.replace('&', '%26')
    return enc

#Set parameters for URL
key_pair_id = "APKAIAZCZRKVIO4BQ" #from the AWS accounts page
priv_key_file = "cloudfront-pk.pem" #your private keypair file
resource = 'video.mp4' #your resource (just object name for streaming videos)
expires = int(time.time()) + 300 #5 min

#Create the signed URL
priv_key_string = open(priv_key_file).read()
signed_url = get_canned_policy_url(resource, priv_key_string, key_pair_id, expires)

#Flash player doesn't like query params so encode them
enc_url = encode_query_param(signed_url)
print(enc_url)
8 - Try out the URLs
Hopefully you should now have a working URL which looks something like this:
video.mp4%3FExpires%3D1309979985%26Signature%3DMUNF7pw1689FhMeSN6JzQmWNVxcaIE9mk1x~KOudJky7anTuX0oAgL~1GW-ON6Zh5NFLBoocX3fUhmC9FusAHtJUzWyJVZLzYT9iLyoyfWMsm2ylCDBqpy5IynFbi8CUajd~CjYdxZBWpxTsPO3yIFNJI~R2AFpWx8qp3fs38Yw_%26Key-Pair-Id%3DAPKAIAZRKVIO4BQ
Put this into your js and you should have something which looks like this (from the PHP example in Amazon's CloudFront documentation):
var so_canned = new SWFObject('http://location.domname.com/~jvngkhow/player.swf','mpl','640','360','9');
so_canned.addParam('allowfullscreen','true');
so_canned.addParam('allowscriptaccess','always');
so_canned.addParam('wmode','opaque');
so_canned.addVariable('file','video.mp4%3FExpires%3D1309979985%26Signature%3DMUNF7pw1689FhMeSN6JzQmWNVxcaIE9mk1x~KOudJky7anTuX0oAgL~1GW-ON6Zh5NFLBoocX3fUhmC9FusAHtJUzWyJVZLzYT9iLyoyfWMsm2ylCDBqpy5IynFbi8CUajd~CjYdxZBWpxTsPO3yIFNJI~R2AFpWx8qp3fs38Yw_%26Key-Pair-Id%3DAPKAIAZRKVIO4BQ');
so_canned.addVariable('streamer','rtmp://s3nzpoyjpct.cloudfront.net/cfx/st');
so_canned.write('canned');
Summary
As you can see, not very easy! boto v2 will help a lot setting up the distribution. I will find out if it's possible to get some URL generation code in there as well to improve this great library!
In Python, what's the easiest way of generating an expiring URL for a file. I have boto installed but I don't see how to get a file from a streaming distribution.
You can generate an expiring signed URL for the resource. The Boto3 documentation has a nice example solution for that:
import datetime

from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import padding
from botocore.signers import CloudFrontSigner

def rsa_signer(message):
    with open('path/to/key.pem', 'rb') as key_file:
        private_key = serialization.load_pem_private_key(
            key_file.read(),
            password=None,
            backend=default_backend()
        )
    signer = private_key.signer(padding.PKCS1v15(), hashes.SHA1())
    signer.update(message)
    return signer.finalize()

key_id = 'AKIAIOSFODNN7EXAMPLE'
url = 'http://d2949o5mkkp72v.cloudfront.net/hello.txt'
expire_date = datetime.datetime(2017, 1, 1)

cloudfront_signer = CloudFrontSigner(key_id, rsa_signer)

# Create a signed url that will be valid until the specific expiry date
# provided using a canned policy.
signed_url = cloudfront_signer.generate_presigned_url(
    url, date_less_than=expire_date)
print(signed_url)
