I use Python Social Auth - Django to log in my users.
My backend is Microsoft, so I can use Microsoft Graph, but I don't think that's relevant here.
Python Social Auth deals with authentication but now I want to call the API and for that, I need a valid access token.
Following the use cases I can get to this:
social = request.user.social_auth.get(provider='azuread-oauth2')
response = self.get_json('https://graph.microsoft.com/v1.0/me',
                         headers={'Authorization': social.extra_data['token_type'] + ' '
                                  + social.extra_data['access_token']})
But the access token is only valid for 3600 seconds, so I need to refresh it. I guess I could do it manually, but there must be a better solution.
How can I get an access_token refreshed?
.get_access_token(strategy) refreshes the token automatically if it's expired. You can use it like this:
from social_django.utils import load_strategy
#...
social = request.user.social_auth.get(provider='google-oauth2')
access_token = social.get_access_token(load_strategy())
Using load_strategy() from social.apps.django_app.utils:
social = request.user.social_auth.get(provider='azuread-oauth2')
strategy = load_strategy()
social.refresh_token(strategy)
Now the updated access_token can be retrieved from social.extra_data['access_token'].
The best approach is probably to check whether it needs to be updated (customized here for AzureAD OAuth2):
import time

from social_django.utils import load_strategy

def get_azuread_oauth2_token(user):
    social = user.social_auth.get(provider='azuread-oauth2')
    if social.extra_data['expires_on'] <= int(time.time()):
        strategy = load_strategy()
        social.refresh_token(strategy)
    return social.extra_data['access_token']
This is based on the method get_auth_token from AzureADOAuth2. I don't think this method is accessible outside the pipeline; please answer this question if there is a way to do it.
Updates
Update 1 - 20/01/2017
Following an issue requesting an extra-data parameter recording the time of the access token refresh, it is now possible to check whether the access_token needs to be updated in every backend.
In future versions (>0.2.1 for the social-auth-core) there will be a new field in extra data:
'auth_time': int(time.time())
And so this works:
def get_token(user, provider):
    social = user.social_auth.get(provider=provider)
    if (social.extra_data['auth_time'] + social.extra_data['expires']) <= int(time.time()):
        strategy = load_strategy()
        social.refresh_token(strategy)
    return social.extra_data['access_token']
Note: According to the OAuth 2 RFC, all responses SHOULD (it's a RECOMMENDED parameter) provide an expires_in, but for most backends (including azuread-oauth2) this value is saved as expires. Be careful to understand how your backend behaves!
An issue on this exists, and I will update the answer with the relevant info when it is available.
Update 2 - 17/02/17
Additionally, there is a method in UserMixin called access_token_expired (code) that can be used to check whether the token is still valid (note: this method doesn't guard against race conditions, as pointed out in this answer by @SCasey).
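For example, a minimal sketch of using it inside a view (the race-condition caveat above still applies):

from social_django.utils import load_strategy

social = request.user.social_auth.get(provider='azuread-oauth2')
if social.access_token_expired():
    social.refresh_token(load_strategy())
access_token = social.extra_data['access_token']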
Update 3 - 31/05/17
In Python Social Auth - Core v1.3.0 get_access_token(self, strategy) was introduced in storage.py.
So now:
from social_django.utils import load_strategy

social = request.user.social_auth.get(provider='azuread-oauth2')
response = self.get_json('https://graph.microsoft.com/v1.0/me',
                         headers={'Authorization': '%s %s' % (social.extra_data['token_type'],
                                                              social.get_access_token(load_strategy()))})
Thanks @damio for pointing it out.
@NBajanca's update is almost correct for version 1.0.1.
extra_data['expires_in'] is now extra_data['expires'].
So the code is:
def get_token(user, provider):
    social = user.social_auth.get(provider=provider)
    if (social.extra_data['auth_time'] + social.extra_data['expires']) <= int(time.time()):
        strategy = load_strategy()
        social.refresh_token(strategy)
    return social.extra_data['access_token']
I'd also recommend subtracting an arbitrary amount of time from that calculation, so we don't run into a race condition where we check the token 0.01s before expiry and then get an error because the request arrives after expiry. I like a 10-second buffer just to be safe, but it's probably overkill:
def get_token(user, provider):
    social = user.social_auth.get(provider=provider)
    if (social.extra_data['auth_time'] + social.extra_data['expires'] - 10) <= int(time.time()):
        strategy = load_strategy()
        social.refresh_token(strategy)
    return social.extra_data['access_token']
EDIT
@NBajanca points out that expires_in is technically correct per the OAuth2 docs. It seems that for some backends this may work. The code above using expires is what works with provider="google-oauth2" as of v1.0.1.
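If you need to handle backends that store either key, a defensive lookup is possible. This is a sketch, not from the original answers; it keeps the 10-second buffer from above and assumes auth_time is present in extra_data:

import time

from social_django.utils import load_strategy

def get_token(user, provider):
    social = user.social_auth.get(provider=provider)
    # Some backends save the RFC 6749 'expires_in' value under 'expires';
    # check both keys instead of assuming one.
    expires = social.extra_data.get('expires', social.extra_data.get('expires_in'))
    if (social.extra_data['auth_time'] + expires - 10) <= int(time.time()):
        social.refresh_token(load_strategy())
    return social.extra_data['access_token']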
Related
I have looked through the FTX api documentation found here: https://docs.ftx.us/#overview
And I've looked at example code found in this repo: https://github.com/ftexchange/ftx/tree/master/rest
I can't GET or POST anything that requires authentication. I am using the API key on my account that has 'full trade permissions', and when I look at print(request.headers), the headers look like they are in the right format.
I've tried: using Google Colab instead of VS Code, updating all my libraries, generating a new API key, and restarting the kernel and my computer. I can pull something like 'markets' because it doesn't need authentication.
Let me know if you need any more information. Below is a portion of my code that isolates the problem and returns {'success': False, 'error': 'Not logged in'}:
import time
import urllib.parse
from typing import Optional, Dict, Any, List
from requests import Request, Session, Response
import hmac
ep = 'https://ftx.us/api/wallet/balances'
ts = int(time.time() * 1000)
s = Session()
request = Request('GET', ep)
prepared = request.prepare()
signature_payload = f'{ts}{prepared.method}{prepared.path_url}'.encode()
if prepared.body:
    signature_payload += prepared.body
signature = hmac.new(secret.encode(), signature_payload, 'sha256').hexdigest()
request.headers['FTX-KEY'] = key
request.headers['FTX-SIGN'] = signature
request.headers['FTX-TS'] = str(ts)
response = s.send(prepared)
data = response.json()
print(data)
I've faced the same problem.
You need to change this part:
prepared.headers['FTX-KEY'] = key
prepared.headers['FTX-SIGN'] = signature
prepared.headers['FTX-TS'] = str(ts)
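For completeness, here is the question's snippet with that one change applied. This is a sketch: key and secret are placeholders for your own credentials.

import hmac
import time
from requests import Request, Session

key = "YOUR_FTX_KEY"        # placeholder
secret = "YOUR_FTX_SECRET"  # placeholder

ep = 'https://ftx.us/api/wallet/balances'
ts = int(time.time() * 1000)

s = Session()
prepared = Request('GET', ep).prepare()

signature_payload = f'{ts}{prepared.method}{prepared.path_url}'.encode()
if prepared.body:
    signature_payload += prepared.body
signature = hmac.new(secret.encode(), signature_payload, 'sha256').hexdigest()

# Set the headers on the *prepared* request -- headers added to the
# original Request after .prepare() never make it onto `prepared`.
prepared.headers['FTX-KEY'] = key
prepared.headers['FTX-SIGN'] = signature
prepared.headers['FTX-TS'] = str(ts)

response = s.send(prepared)
print(response.json())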
P.S. I believe FTX needs to fix their API documentation.
P.P.S. I've checked part of the https://github.com/ftexchange/ftx/tree/master/rest code. I believe the FTX folks just copy-pasted this code into the docs, but it originally belongs to a more sophisticated object-oriented solution that works correctly, because there they pass an already-created request into the method and use the prepared variable only to calculate path_url and method.
For ftx.us, you need to use different headers:
prepared.headers['FTXUS-KEY'] = key
prepared.headers['FTXUS-TS'] = str(ts)
prepared.headers['FTXUS-SIGN'] = signature
I am working on a project and I am trying to get an access token to use the DocuSign API, but the call to get the OAuth userinfo does not work.
Authentication type: JWT Grant
Python SDK: docusign-python-client
The code:
import time

from docusign_esign import ApiClient


class BaseDocusign:
    api_client = None
    _token_received = False
    expiresTimestamp = 0
    account = None

    def __init__(self):
        BaseDocusign.api_client = ApiClient()

    def token_is_expired(self):
        current_time = int(round(time.time()))
        return (current_time + DOCUSIGN_EXPIRE_TIME - 10) > BaseDocusign.expiresTimestamp

    def check_token(self):
        if not BaseDocusign._token_received or self.token_is_expired():
            self.update_token()

    def update_token(self):
        client = BaseDocusign.api_client
        client.request_jwt_user_token(
            DOCUSIGN_CLIENT_ID,
            DOCUSIGN_ACCOUNT_ID,
            DOCUSIGN_AUTH_SERVER,
            DOCUSIGN_PRIVATE_KEY,
            DOCUSIGN_EXPIRE_TIME
        )
        if BaseDocusign.account is None:
            account = self.get_account_info(client)
            print(account)
        BaseDocusign._token_received = True
        BaseDocusign.expiresTimestamp = int(round(time.time())) + DOCUSIGN_EXPIRE_TIME

    def get_account_info(self, client):
        client.host = DOCUSIGN_AUTH_SERVER
        response = client.call_api("/oauth/userinfo", "GET", response_type="object")
        if len(response) > 1 and not (200 <= response[1] < 300):
            raise Exception("can not get user info: {}".format(response[1]))
        accounts = response[0]['accounts']
        target = target_account_id
        if target is None or target == "FALSE":
            # Look for the default account
            for acct in accounts:
                if acct['is_default']:
                    return acct
        # Look for a specific account
        for acct in accounts:
            if acct['account_id'] == target:
                return acct
        raise Exception(f"User does not have access to account {target}\n")
When I run it:
a = BaseDocusign()
a.update_token()
The access token is generated:
{"access_token":"eyJ0eXAiOiJNVCIsImFsZyI6IlJTMjU2Iiwia2lkIjoiNjgxODVmZjEtNGU1MS00Y2U5LWFmMWMtNjg5ODEyMjAzMzE3In0.AQkAAAABAAsADQAkAAAAZjczYjYxMmMtOGI3Ny00YjRjLWFkZTQtZTI0ZWEyYjY4MTEwIgAkAAAAZjczYjYxMmMtOGI3Ny00YjRjLWFkZTQtZTI0ZWEyYjY4MTEwBwAAq89LFJXXSAgAAOvyWVeV10gLAB8AAABodHRwczovL2FjY291bnQtZC5kb2N1c2lnbi5jb20vDAAkAAAAZjczYjYxMmMtOGI3Ny00YjRjLWFkZTQtZTI0ZWEyYjY4MTEwGAABAAAABQAAABIAAQAAAAYAAABqd3RfYnI.f_XW63iL5ABts-gq48ciWKQnaYyNiIEG9rC_CpnyWo0Hzf-B_G3hIRUWJzD1Yiyyy4pKm_8-zoalsoqANcMeXsjwBTCMlXIhc216ZWa6nHR6CheRbfTHM6bJ1LKwRdmnpwLywu_qiqrEwEOlZkwH_GzSSP9piUtpCmhgdZY1GFnG2u9JU_3jd8nKN87PE_cn2sjD3fNMRHQXjnPeHPyBZpC171TyuEvQFKCbV5QOwiVXmZbE9Aa_unC-xXvvJ2cA3daVaUBHoasXUxo5CZDNb9aDxtQkn5GCgQL7JChL7XAfrgXAQMOb-rEzocBpPJKHl6chBNiFcl-gfFWw2naomA","token_type":"Application","expires_in":28800}
But when I try to get the account info, the call fails:
{"error":"internal_server_error","reference_id":"f20e360c-185d-463e-9f0b-ce95f38fe711"}
To do this, I call the get_account_info function, which calls the oauth/userinfo endpoint, but the call fails.
response = client.call_api("/oauth/userinfo", "GET", response_type="object")
# Response: {"error":"internal_server_error","reference_id":"f20e360c-185d-463e-9f0b-ce95f38fe711"}
To run this example, I need the account_id variable, and according to that example the get_account_info function retrieves it.
I have also tried to do what the docs say (step 4) to get user information, and the answer is:
curl --request GET https://account-d.docusign.com/oauth/userinfo--header "Authorization: Bearer eyJ0eXAiOiJNVCIsImFsZyI6IlJTMjU2Iiwia2lkIjoiNjgxODVmZjEtNGU1MS00Y2U5LWFmMWMtNjg5ODEyMjAzMzE3In0.AQoAAAABAAUABwAAYWSFlJrXSAgAAMko55ya10gCAP-ftnA70YROvfpqFSh7j7kVAAEAAAAYAAEAAAAFAAAADQAkAAAAZjczYjYxMmMtOGI3Ny00YjRjLWFkZTQtZTI0ZWEyYjY4MTEwIgAkAAAAZjczYjYxMmMtOGI3Ny00YjRjLWFkZTQtZTI0ZWEyYjY4MTEwEgABAAAABgAAAGp3dF9iciMAJAAAAGY3M2I2MTJjLThiNzctNGI0Yy1hZGU0LWUyNGVhMmI2ODExMA.YHFoD2mQbwh8rdiPi8swg9kO9srlDyJcpqUo8XI5tdZki2I_Nla-qb9VaD4gAy8tSXVSY7unRjfClFDAqC8Ur73caHuZo7tN5tIKmXi6C3VzPWPGFJtsceKNEGMqwznw6OBVuPQG0IGlRjXK37Ur1nILLUWKb7w6O5Uz6y0e5uR8sxzZWh1adm2zHqd6khiQuAFB9vG2sS3jaudtck1qV6HRB_kARvUie1zglvHydc42Nc_o5GtIm3sGrqW7rio3YpHVX39nTKM-28kjOvPSNwzXp3IlZtaxuB6EdexrECH19nIaNbCe29LrdpzreRMyjEwwM309bOaKJ1KV82NbTQ"
# Response
<html><head><title>Object moved</title></head><body>
<h2>Object moved to here.</h2>
</body></html>
curl: (3) URL using bad/illegal format or missing URL
Thanks for all :)
Just looking at the code, the return value is acct, a dictionary, so you need to use account['account_id'].
I found this full example: https://github.com/docusign/eg-01-python-jwt
And in here: https://github.com/docusign/eg-01-python-jwt/blob/master/example_base.py#L44
you see how they are passing the account_id
Hopefully this helps. Good luck!
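A quick sketch of using that return value (hedged: the key names follow DocuSign's /oauth/userinfo response shape, e.g. account_id and base_uri; verify against your actual response):

a = BaseDocusign()
a.update_token()
account = a.get_account_info(BaseDocusign.api_client)
account_id = account['account_id']
base_uri = account.get('base_uri')  # also part of each userinfo account entry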
You must use request_jwt_user_token not request_jwt_application_token
See the code example: https://github.com/docusign/eg-01-python-jwt/blob/master/example_base.py#L34
request_jwt_application_token is only for some of the DocuSign organization APIs.
Added
From the comment:
I have changed the call to request_jwt_user_token and I get another token, but it still fails. The response is {"error":"internal_server_error","reference_id":"846114d0-1bcd-47a6-ba23-317049b54d00"}
Answer:
You're calling the /oauth/userinfo API method. But the Authorization header was not included.
One way is to set the Authorization explicitly:
client.set_default_header("Authorization", "Bearer " + ds_access_token)
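Wired into the question's update_token, that could look like this (a sketch: in recent SDK versions request_jwt_user_token returns a token object with an access_token attribute, but verify against your SDK version):

oauth = client.request_jwt_user_token(
    DOCUSIGN_CLIENT_ID,
    DOCUSIGN_ACCOUNT_ID,
    DOCUSIGN_AUTH_SERVER,
    DOCUSIGN_PRIVATE_KEY,
    DOCUSIGN_EXPIRE_TIME
)
# Pass the freshly issued access token explicitly on subsequent calls.
client.set_default_header("Authorization", "Bearer " + oauth.access_token)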
In your case, the SDK is supposed to set it for you. It could be that you're using a new client object, an older SDK version, or some other issue.
I just downloaded the eg-01-python-jwt code example repo and it worked fine. I suggest that you download the example app and get it running first, then update the app to your needs.
Also, check the version of the Python SDK you're using:
pip3 show docusign_esign
Name: docusign-esign
Version: 3.0.0
Summary: DocuSign REST API
...
Location: /usr/local/lib/python3.7/site-packages
...
This error can be also caused by using an expired token (use the refresh end point to get a new one)
I've been asked to deal with an external REST API (Zendesk's, in fact) whose credentials need to be formatted as {email}/token:{security_token} -- a single value rather than the usual username/password pair. I'm trying to use the Python requests module for this task, since it's Pythonic and doesn't hurt my brain too badly, but I'm not sure how to format the authentication credentials. The Zendesk documentation only gives access examples using curl, which I'm unfamiliar with.
Here's how I currently have requests.auth.AuthBase subclassed:
import base64

import requests

class ZDTokenAuth(requests.auth.AuthBase):
    def __init__(self, username, token):
        self.username = username
        self.token = token

    def __call__(self, r):
        auth_string = self.username + "/token:" + self.token
        auth_string = auth_string.encode('utf-8')
        r.headers['Authorization'] = base64.b64encode(auth_string)
        return r
I'm not sure that the various encodings are required, but that's how someone did it on github (https://github.com/skipjac/Zendesk-python-api/blob/master/zendesk-ticket-delete.py) so why not. I've tried it without the encoding too, of course - same result.
Here's the class and methods I'm using to test this:
import requests

class ZDStats(object):
    api_base = "https://mycompany.zendesk.com/api/v2/"

    def __init__(self, zd_auth):
        self.zd_auth = zd_auth  # this is assumed to be a ZDTokenAuth object

    def testCredentials(self):
        zd_users_url = self.api_base + "users.json"
        zdreq = requests.get(zd_users_url, auth=self.zd_auth)
        return zdreq
This is called with:
credentials = ZDTokenAuth(zd_username,zd_apitoken)
zd = ZDStats(credentials)
users = zd.testCredentials()
print(users.status_code)
print(users.text)
The status code I'm getting back is a 401, and the text is simply {"error":"Couldn't authenticate you."}. Clearly I'm doing something wrong here, but I don't think I know enough to know what it is I'm doing wrong, if that makes sense. Any ideas?
What you're missing is the auth type. Your Authorization header should be created like this:
r.headers['Authorization'] = b"Basic " + base64.b64encode(auth_string)
You can also achieve the same thing by passing a tuple as the auth parameter:
requests.get(url, auth=(username+"/token", token))
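For example, a minimal end-to-end sketch (hypothetical subdomain and credentials); requests' built-in Basic auth adds the "Basic " prefix and the base64 encoding for you, so no custom AuthBase subclass is needed:

import requests

resp = requests.get(
    "https://mycompany.zendesk.com/api/v2/users.json",
    auth=("user@example.com/token", "YOUR_API_TOKEN"),
)
print(resp.status_code)
print(resp.text)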
I'm using the python-ldap library to connect to our LDAP server and run queries. The issue I'm running into is that despite setting a size limit on the search, I keep getting SIZELIMIT_EXCEEDED errors on any query that would return too many results. I know that the query itself is working because I will get a result if the query returns a small subset of users. Even if I set the size limit to something absurd, like 1, I'll still get a SIZELIMIT_EXCEEDED on those bigger queries. I've pasted a generic version of my query below. Any ideas as to what I'm doing wrong here?
result = self.ldap.search_ext_s(self.base, self.scope, '(personFirstMiddle=<value>*)', sizelimit=5)
When the LDAP client requests a size-limit, that is called a 'client-requested' size limit. A client-requested size limit cannot override the size-limit set by the server. The server may set a size-limit for the server as a whole, for a particular authorization identity, or for other reasons - whichever the case, the client may not override the server size limit. The search request may have to be issued in multiple parts using the simple paged results control or the virtual list view control.
Here's a Python3 implementation that I came up with after heavily editing what I found here and in the official documentation. At the time of writing this it works with the pip3 package python-ldap version 3.2.0.
import ldap
from ldap.controls import SimplePagedResultsControl

def get_list_of_ldap_users():
    hostname = "google.com"
    username = "username_here"
    password = "password_here"
    base = "dc=google,dc=com"

    print(f"Connecting to the LDAP server at '{hostname}'...")
    connect = ldap.initialize(f"ldap://{hostname}")
    connect.set_option(ldap.OPT_REFERRALS, 0)
    connect.simple_bind_s(username, password)

    search_flt = "(personFirstMiddle=<value>*)"  # get all users with a specific middle name
    page_size = 500  # how many users to search for in each page; depends on the server maximum setting (default is 1000)
    searchreq_attrlist = ["cn", "sn", "name", "userPrincipalName"]  # change these to the attributes you care about

    req_ctrl = SimplePagedResultsControl(criticality=True, size=page_size, cookie='')
    msgid = connect.search_ext(base=base, scope=ldap.SCOPE_SUBTREE, filterstr=search_flt,
                               attrlist=searchreq_attrlist, serverctrls=[req_ctrl])

    total_results = []
    pages = 0
    while True:  # loop over all of the pages using the same cookie, otherwise the search will fail
        pages += 1
        rtype, rdata, rmsgid, serverctrls = connect.result3(msgid)
        for user in rdata:
            total_results.append(user)

        pctrls = [c for c in serverctrls if c.controlType == SimplePagedResultsControl.controlType]
        if pctrls and pctrls[0].cookie:
            # Copy the cookie from the response control to the request control and fetch the next page
            req_ctrl.cookie = pctrls[0].cookie
            msgid = connect.search_ext(base=base, scope=ldap.SCOPE_SUBTREE, filterstr=search_flt,
                                       attrlist=searchreq_attrlist, serverctrls=[req_ctrl])
        else:
            break

    return total_results
This will return a list of all users, but you can edit it as required to return what you want without hitting the SIZELIMIT_EXCEEDED issue :)
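Usage is then straightforward (a sketch: entries come back as (dn, attrs) tuples from python-ldap; referral entries have a DN of None and are skipped here):

users = get_list_of_ldap_users()
print(f"Found {len(users)} entries")
for dn, attrs in users[:5]:
    if dn is not None:  # skip referral entries
        print(dn, attrs.get("cn"))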
I have created a S3 bucket, uploaded a video, created a streaming distribution in CloudFront. Tested it with a static HTML player and it works. I have created a keypair through the account settings. I have the private key file sitting on my desktop at the moment. That's where I am.
My aim is to get to a point where my Django/Python site creates secure URLs and people can't access the videos unless they've come from one of my pages. The problem is I'm allergic to the way Amazon have laid things out and I'm just getting more and more confused.
I realise this isn't going to be the best question on StackOverflow but I'm certain I can't be the only fool out here that can't make heads or tails out of how to set up a secure CloudFront/S3 situation. I would really appreciate your help and am willing (once two days has passed) give a 500pt bounty to the best answer.
I have several questions that, once answered, should fit into one explanation of how to accomplish what I'm after:
1. In the documentation (there's an example in the next point) there's lots of XML lying around telling me I need to POST things to various places. Is there an online console for doing this? Or do I literally have to force this up via cURL (et al)?
2. How do I create an Origin Access Identity for CloudFront and bind it to my distribution? I've read this document but, per the first point, don't know what to do with it. How does my keypair fit into this?
3. Once that's done, how do I limit the S3 bucket to only allow people to download things through that identity? If this is another XML jobby rather than clicking around the web UI, please tell me where and how I'm supposed to get this into my account.
4. In Python, what's the easiest way of generating an expiring URL for a file? I have boto installed but I don't see how to get a file from a streaming distribution.
5. Are there any applications or scripts that can take the difficulty out of setting this garb up? I use Ubuntu (Linux) but I have XP in a VM if it's Windows-only. I've already looked at CloudBerry S3 Explorer Pro - but it makes about as much sense as the online UI.
You're right, it takes a lot of API work to get this set up. I hope they get it in the AWS Console soon!
UPDATE: I have submitted this code to boto - as of boto v2.1 (released 2011-10-27) this gets much easier. For boto < 2.1, use the instructions here. For boto 2.1 or greater, get the updated instructions on my blog: http://www.secretmike.com/2011/10/aws-cloudfront-secure-streaming.html Once boto v2.1 gets packaged by more distros I'll update the answer here.
To accomplish what you want you need to perform the following steps which I will detail below:
1. Create your s3 bucket and upload some objects (you've already done this)
2. Create a CloudFront "Origin Access Identity" (basically an AWS account to allow CloudFront to access your s3 bucket)
3. Modify the ACLs on your objects so that only your CloudFront Origin Access Identity is allowed to read them (this prevents people from bypassing CloudFront and going direct to s3)
4. Create a CloudFront distribution with basic URLs and one which requires signed URLs
5. Test that you can download objects from the basic CloudFront distribution but not from s3 or the signed CloudFront distribution
6. Create a key pair for signing URLs
7. Generate some URLs using Python
8. Test that the signed URLs work
1 - Create Bucket and upload object
The easiest way to do this is through the AWS Console but for completeness I'll show how using boto. Boto code is shown here:
import boto
#credentials stored in environment AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
s3 = boto.connect_s3()
#bucket name MUST follow dns guidelines
new_bucket_name = "stream.example.com"
bucket = s3.create_bucket(new_bucket_name)
object_name = "video.mp4"
key = bucket.new_key(object_name)
key.set_contents_from_filename(object_name)
2 - Create a Cloudfront "Origin Access Identity"
For now, this step can only be performed using the API. Boto code is here:
import boto
#credentials stored in environment AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
cf = boto.connect_cloudfront()
oai = cf.create_origin_access_identity(comment='New identity for secure videos')
#We need the following two values for later steps:
print("Origin Access Identity ID: %s" % oai.id)
print("Origin Access Identity S3CanonicalUserId: %s" % oai.s3_user_id)
3 - Modify the ACLs on your objects
Now that we've got our special S3 user account (the S3CanonicalUserId we created above) we need to give it access to our s3 objects. We can do this easily using the AWS Console by opening the object's (not the bucket's!) Permissions tab, clicking the "Add more permissions" button, and pasting the very long S3CanonicalUserId we got above into the "Grantee" field of a new permission. Make sure you give the new permission "Open/Download" rights.
You can also do this in code using the following boto script:
import boto
#credentials stored in environment AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
s3 = boto.connect_s3()
bucket_name = "stream.example.com"
bucket = s3.get_bucket(bucket_name)
object_name = "video.mp4"
key = bucket.get_key(object_name)
#Now add read permission to our new s3 account
s3_canonical_user_id = "<your S3CanonicalUserID from above>"
key.add_user_grant("READ", s3_canonical_user_id)
4 - Create a cloudfront distribution
Note that custom origins and private distributions are not fully supported in boto until version 2.0, which had not been formally released at the time of writing. The code below pulls some code out of the boto 2.0 branch and hacks it together to get it going, but it's not pretty. The 2.0 branch handles this much more elegantly - definitely use that if possible!
import boto
from boto.cloudfront.distribution import DistributionConfig
from boto.cloudfront.exception import CloudFrontServerError
import re

def get_domain_from_xml(xml):
    results = re.findall("<DomainName>([^<]+)</DomainName>", xml)
    return results[0]

#custom class to hack this until boto v2.0 is released
class HackedStreamingDistributionConfig(DistributionConfig):
    def __init__(self, connection=None, origin='', enabled=False,
                 caller_reference='', cnames=None, comment='',
                 trusted_signers=None):
        DistributionConfig.__init__(self, connection=connection,
                                    origin=origin, enabled=enabled,
                                    caller_reference=caller_reference,
                                    cnames=cnames, comment=comment,
                                    trusted_signers=trusted_signers)

    #override the to_xml() function
    def to_xml(self):
        s = '<?xml version="1.0" encoding="UTF-8"?>\n'
        s += '<StreamingDistributionConfig xmlns="http://cloudfront.amazonaws.com/doc/2010-07-15/">\n'
        s += '  <S3Origin>\n'
        s += '    <DNSName>%s</DNSName>\n' % self.origin
        if self.origin_access_identity:
            val = self.origin_access_identity
            s += '    <OriginAccessIdentity>origin-access-identity/cloudfront/%s</OriginAccessIdentity>\n' % val
        s += '  </S3Origin>\n'
        s += '  <CallerReference>%s</CallerReference>\n' % self.caller_reference
        for cname in self.cnames:
            s += '  <CNAME>%s</CNAME>\n' % cname
        if self.comment:
            s += '  <Comment>%s</Comment>\n' % self.comment
        s += '  <Enabled>'
        if self.enabled:
            s += 'true'
        else:
            s += 'false'
        s += '</Enabled>\n'
        if self.trusted_signers:
            s += '<TrustedSigners>\n'
            for signer in self.trusted_signers:
                if signer == 'Self':
                    s += '  <Self/>\n'
                else:
                    s += '  <AwsAccountNumber>%s</AwsAccountNumber>\n' % signer
            s += '</TrustedSigners>\n'
        if self.logging:
            s += '<Logging>\n'
            s += '  <Bucket>%s</Bucket>\n' % self.logging.bucket
            s += '  <Prefix>%s</Prefix>\n' % self.logging.prefix
            s += '</Logging>\n'
        s += '</StreamingDistributionConfig>\n'
        return s

    def create(self):
        response = self.connection.make_request('POST',
                                                '/%s/%s' % ("2010-11-01", "streaming-distribution"),
                                                {'Content-Type': 'text/xml'},
                                                data=self.to_xml())
        body = response.read()
        if response.status == 201:
            return body
        else:
            raise CloudFrontServerError(response.status, response.reason, body)
cf = boto.connect_cloudfront()
s3_dns_name = "stream.example.com.s3.amazonaws.com"
comment = "example streaming distribution"
oai = "<OAI ID from step 2 above like E23KRHS6GDUF5L>"
#Create a distribution that does NOT need signed URLS
hsd = HackedStreamingDistributionConfig(connection=cf, origin=s3_dns_name, comment=comment, enabled=True)
hsd.origin_access_identity = oai
basic_dist = hsd.create()
print("Distribution with basic URLs: %s" % get_domain_from_xml(basic_dist))
#Create a distribution that DOES need signed URLS
hsd = HackedStreamingDistributionConfig(connection=cf, origin=s3_dns_name, comment=comment, enabled=True)
hsd.origin_access_identity = oai
#Add some required signers (Self means your own account)
hsd.trusted_signers = ['Self']
signed_dist = hsd.create()
print("Distribution with signed URLs: %s" % get_domain_from_xml(signed_dist))
5 - Test that you can download objects from cloudfront but not from s3
You should now be able to verify:
stream.example.com.s3.amazonaws.com/video.mp4 - should give AccessDenied
signed_distribution.cloudfront.net/video.mp4 - should give MissingKey (because the URL is not signed)
basic_distribution.cloudfront.net/video.mp4 - should work fine
The tests will have to be adjusted to work with your stream player, but the basic idea is that only the basic cloudfront url should work.
6 - Create a keypair for CloudFront
I think the only way to do this is through Amazon's web site. Go into your AWS "Account" page and click on the "Security Credentials" link. Click on the "Key Pairs" tab then click "Create a New Key Pair". This will generate a new key pair for you and automatically download a private key file (pk-xxxxxxxxx.pem). Keep the key file safe and private. Also note down the "Key Pair ID" from amazon as we will need it in the next step.
7 - Generate some URLs in Python
As of boto version 2.0 there does not seem to be any support for generating signed CloudFront URLs. Python does not include RSA encryption routines in the standard library so we will have to use an additional library. I've used M2Crypto in this example.
For a non-streaming distribution, you must use the full CloudFront URL as the resource; for streaming, we only use the object name of the video file. See the code below for a full example of generating a URL which only lasts for 5 minutes.
This code is based loosely on the PHP example code provided by Amazon in the CloudFront documentation.
from M2Crypto import EVP
import base64
import time

def aws_url_base64_encode(msg):
    msg_base64 = base64.b64encode(msg)
    msg_base64 = msg_base64.replace('+', '-')
    msg_base64 = msg_base64.replace('=', '_')
    msg_base64 = msg_base64.replace('/', '~')
    return msg_base64

def sign_string(message, priv_key_string):
    key = EVP.load_key_string(priv_key_string)
    key.reset_context(md='sha1')
    key.sign_init()
    key.sign_update(str(message))
    signature = key.sign_final()
    return signature

def create_url(url, encoded_signature, key_pair_id, expires):
    signed_url = "%(url)s?Expires=%(expires)s&Signature=%(encoded_signature)s&Key-Pair-Id=%(key_pair_id)s" % {
        'url': url,
        'expires': expires,
        'encoded_signature': encoded_signature,
        'key_pair_id': key_pair_id,
    }
    return signed_url

def get_canned_policy_url(url, priv_key_string, key_pair_id, expires):
    #we manually construct this policy string to ensure formatting matches signature
    canned_policy = '{"Statement":[{"Resource":"%(url)s","Condition":{"DateLessThan":{"AWS:EpochTime":%(expires)s}}}]}' % {'url': url, 'expires': expires}
    #now base64 encode it (must be URL safe)
    encoded_policy = aws_url_base64_encode(canned_policy)
    #sign the non-encoded policy
    signature = sign_string(canned_policy, priv_key_string)
    #now base64 encode the signature (URL safe as well)
    encoded_signature = aws_url_base64_encode(signature)
    #combine these into a full url
    signed_url = create_url(url, encoded_signature, key_pair_id, expires)
    return signed_url

def encode_query_param(resource):
    enc = resource
    enc = enc.replace('?', '%3F')
    enc = enc.replace('=', '%3D')
    enc = enc.replace('&', '%26')
    return enc

#Set parameters for URL
key_pair_id = "APKAIAZCZRKVIO4BQ"  #from the AWS accounts page
priv_key_file = "cloudfront-pk.pem"  #your private keypair file
resource = 'video.mp4'  #your resource (just the object name for streaming videos)
expires = int(time.time()) + 300  #5 min

#Create the signed URL
priv_key_string = open(priv_key_file).read()
signed_url = get_canned_policy_url(resource, priv_key_string, key_pair_id, expires)

#Flash player doesn't like query params so encode them
enc_url = encode_query_param(signed_url)
print(enc_url)
8 - Try out the URLs
Hopefully you should now have a working URL which looks something like this:
video.mp4%3FExpires%3D1309979985%26Signature%3DMUNF7pw1689FhMeSN6JzQmWNVxcaIE9mk1x~KOudJky7anTuX0oAgL~1GW-ON6Zh5NFLBoocX3fUhmC9FusAHtJUzWyJVZLzYT9iLyoyfWMsm2ylCDBqpy5IynFbi8CUajd~CjYdxZBWpxTsPO3yIFNJI~R2AFpWx8qp3fs38Yw_%26Key-Pair-Id%3DAPKAIAZRKVIO4BQ
Put this into your js and you should have something which looks like this (from the PHP example in Amazon's CloudFront documentation):
var so_canned = new SWFObject('http://location.domname.com/~jvngkhow/player.swf','mpl','640','360','9');
so_canned.addParam('allowfullscreen','true');
so_canned.addParam('allowscriptaccess','always');
so_canned.addParam('wmode','opaque');
so_canned.addVariable('file','video.mp4%3FExpires%3D1309979985%26Signature%3DMUNF7pw1689FhMeSN6JzQmWNVxcaIE9mk1x~KOudJky7anTuX0oAgL~1GW-ON6Zh5NFLBoocX3fUhmC9FusAHtJUzWyJVZLzYT9iLyoyfWMsm2ylCDBqpy5IynFbi8CUajd~CjYdxZBWpxTsPO3yIFNJI~R2AFpWx8qp3fs38Yw_%26Key-Pair-Id%3DAPKAIAZRKVIO4BQ');
so_canned.addVariable('streamer','rtmp://s3nzpoyjpct.cloudfront.net/cfx/st');
so_canned.write('canned');
Summary
As you can see, not very easy! boto v2 will help a lot setting up the distribution. I will find out if it's possible to get some URL generation code in there as well to improve this great library!
In Python, what's the easiest way of generating an expiring URL for a file? I have boto installed but I don't see how to get a file from a streaming distribution.
You can generate an expiring signed URL for the resource. The Boto3 documentation has a nice example solution for that:
import datetime

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def rsa_signer(message):
    with open('path/to/key.pem', 'rb') as key_file:
        private_key = serialization.load_pem_private_key(
            key_file.read(),
            password=None,
            backend=default_backend()
        )
    # Recent versions of `cryptography` removed the private_key.signer()
    # API used in older examples; private_key.sign() is the replacement.
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())

key_id = 'AKIAIOSFODNN7EXAMPLE'
url = 'http://d2949o5mkkp72v.cloudfront.net/hello.txt'
expire_date = datetime.datetime(2017, 1, 1)

cloudfront_signer = CloudFrontSigner(key_id, rsa_signer)

# Create a signed URL that will be valid until the specific expiry date
# provided, using a canned policy.
signed_url = cloudfront_signer.generate_presigned_url(
    url, date_less_than=expire_date)
print(signed_url)