I'm just starting to explore IAM Roles. So far I've launched an instance and created an IAM Role. Everything seems to work as expected. Currently I'm using boto (the Python SDK).
What I don't understand :
Does boto take care of credential rotation? (For example, imagine I have an instance that should be up for a long time and constantly has to upload keys to an S3 bucket. If the credentials expire, do I need to 'catch' an exception and reconnect, or will boto silently do this for me?)
Is it possible to manually trigger IAM to change the credentials on the Role? (I want to do this because I want to test the above example. Or is there an alternative to this test case?)
The boto library does handle credential rotation. Or, rather, AWS rotates the credentials and boto automatically picks up the new credentials. Currently, boto does this by checking the expiration timestamp of the temporary credentials. If the expiration is within 5 minutes of the current time, it will query the metadata service on the instance for the IAM role credentials. The service is responsible for rotating the credentials.
I'm not aware of a way to force the service to rotate the credentials but you could probably force boto to look for updated credentials by manually adjusting the expiration timestamp of the current credentials.
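If you want to see what the metadata service is handing out (and when the current credentials expire), you can query it directly from the instance. A minimal sketch, assuming IMDSv1 is reachable and an IAM role is attached to the instance:
import json
import urllib.request

# Base path of the instance metadata service for IAM role credentials (IMDSv1)
BASE = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

# The first request lists the role name attached to the instance
role_name = urllib.request.urlopen(BASE).read().decode().strip()

# The second request returns the temporary credentials for that role
creds = json.loads(urllib.request.urlopen(BASE + role_name).read().decode())

print(creds["AccessKeyId"])
print(creds["Expiration"])  # boto refreshes shortly before this timestamp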
Related
I am looking for a way to perform the equivalent of the AWS CLI's method aws configure get varname [--profile profile-name] using boto3 in Python. Does anyone know if this is possible without either:
Parsing the AWS config file myself
Somehow interacting with the AWS CLI itself from my python script
For more context, I am writing a python cli tool that will interact with AWS APIs using boto3. The python tool uses an AWS session token stored in a profile in the ~/.aws/credentials file. I am using the saml2aws cli to fetch AWS credentials from my company's identity provider, which writes the aws_access_key_id, aws_secret_access_key, aws_session_token, aws_security_token, x_principal_arn, and x_security_token_expires parameters to the ~/.aws/credentials file like so:
[saml]
aws_access_key_id = #REMOVED#
aws_secret_access_key = #REMOVED#
aws_session_token = #REMOVED#
aws_security_token = #REMOVED#
x_principal_arn = arn:aws:sts::000000000123:assumed-role/MyAssumedRole
x_security_token_expires = 2019-08-19T15:00:56-06:00
By the nature of my python cli tool, sometimes the tool will execute past the expiration time of the AWS session token, whose lifetime is enforced to be quite short by my company. I want the python cli tool to check the expiration time before it starts its critical task, to verify that it has enough time to complete that task, and if not, alert the user to refresh their session token.
Using the AWS CLI, I can fetch the expiration time of the AWS session token from the ~/.aws/credentials file like this:
$ aws configure get x_security_token_expires --profile saml
2019-08-19T15:00:56-06:00
and I am curious if boto3 has a mechanism I was unable to find to do something similar.
As an alternate solution, given an already generated AWS session token, is it possible to fetch its expiration time? However, given the lack of answers on questions such as Ways to find out how soon the AWS session expires?, I would guess not.
Since the official AWS CLI is powered by boto3, I was able to dig into the source to find out how aws configure get is implemented. It's possible to read the profile configuration through the botocore Session object. Here is some code to get the config profile and value used in your example:
import botocore.session
# Create an empty botocore session directly
session = botocore.session.Session()
# Get config of desired profile. full_config is a standard python dictionary.
profiles_config = session.full_config.get("profiles", {})
saml_config = profiles_config.get("saml", {})
# Get config value. This will be None if the setting doesn't exist.
saml_security_token_expires = saml_config.get("x_security_token_expires")
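If you then want to compare that timestamp to the current time (the profile value is ISO-8601 with a UTC offset), something like the following should work; it assumes Python 3.7+ for datetime.fromisoformat:
from datetime import datetime, timezone

# saml_security_token_expires comes from the snippet above, e.g. "2019-08-19T15:00:56-06:00"
expires_at = datetime.fromisoformat(saml_security_token_expires)
remaining = expires_at - datetime.now(timezone.utc)

# Require some headroom before starting the critical task (15 minutes is just an example)
if remaining.total_seconds() < 15 * 60:
    print("Session token expires soon, refresh it with saml2aws")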
I'm using code similar to the above as part of a transparent session cache. It checks for a profile's role_arn so I can identify a cached session to load if one exists and hasn't expired.
As far as the alternate question of knowing how long a given session has before expiring, you are correct in that there is currently no API call that can tell you this. Session expiration is only given when the session is created, either through STS get_session_token or assume_role API calls. You have to hold onto the expiration info yourself after that.
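For example, when you create the session yourself, the STS response already carries the expiration, so you can store it at that point; a small sketch with a placeholder ARN and session name:
import boto3

sts = boto3.client("sts")
response = sts.assume_role(
    RoleArn="arn:aws:iam::000000000123:role/MyAssumedRole",  # placeholder ARN
    RoleSessionName="my-session",
)

# 'Expiration' is a timezone-aware datetime; you have to keep it around yourself
expires_at = response["Credentials"]["Expiration"]
print(expires_at)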
I have created an Azure Function which is triggered when a new file is added to my Blob Storage. This part works well!
BUT, now I would like to start the "Speech-To-Text" Azure service using the API. So I try to create a URI pointing to my new blob and then add it to the API call. To do so I created a SAS token (from the Azure Portal) and appended it to my new blob path.
https://myblobstorage...../my/new/blob.wav?[SAS Token generated]
By doing so I get an error which says:
Authentification failed Invalid URI
What am I missing here?
N.B.: When I generate the SAS token manually from Azure Storage Explorer, everything works well. Also, my token is not expired in my test.
Thank you for your help!
You might have generated the SAS token with the wrong allowed resource types.
Make sure the Object option is checked.
Here is the reason in docs:
Service (s): Access to service-level APIs (e.g., Get/Set Service Properties, Get Service Stats, List Containers/Queues/Tables/Shares)
Container (c): Access to container-level APIs (e.g., Create/Delete Container, Create/Delete Queue, Create/Delete Table, Create/Delete Share, List Blobs/Files and Directories)
Object (o): Access to object-level APIs for blobs, queue messages, table entities, and files (e.g., Put Blob, Query Entity, Get Messages, Create File, etc.)
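If you generate the SAS token in code rather than in the portal, you can make sure the object resource type is included. A rough sketch with the azure-storage-blob v12 Python SDK; the account name, key and blob path are placeholders:
from datetime import datetime, timedelta
from azure.storage.blob import generate_account_sas, ResourceTypes, AccountSasPermissions

account_name = "myblobstorage"         # placeholder
account_key = "<storage-account-key>"  # placeholder

sas_token = generate_account_sas(
    account_name=account_name,
    account_key=account_key,
    resource_types=ResourceTypes(object=True),    # the "Object (o)" option
    permission=AccountSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(hours=1),
)

blob_url = f"https://{account_name}.blob.core.windows.net/my-container/my/new/blob.wav?{sas_token}"
You can then pass blob_url to the Speech-To-Text call the same way as before.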
I'm developing a Cloud Run Service that accesses different Google APIs using a service account's secrets file with the following python 3 code:
from google.oauth2 import service_account
credentials = service_account.Credentials.from_service_account_file(SECRETS_FILE_PATH, scopes=SCOPES)
In order to deploy it, I upload the secrets file during the build/deploy process (via gcloud builds submit and gcloud run deploy commands).
How can I avoid uploading the secrets file like this?
Edit 1:
I think it is important to note that I need to impersonate user accounts from GSuite/Workspace (with domain wide delegation). The way I deal with this is by using the above credentials followed by:
delegated_credentials = credentials.with_subject(USER_EMAIL)
Using Secret Manager might help you, as you can manage the multiple secrets you have without storing them as files, as you are doing right now. I would recommend taking a look at this article here, so you can get more information on how to use it with Cloud Run and improve the way you manage your secrets.
In addition to that, as clarified in this similar case here, you have two options: use the default service account that comes with Cloud Run, or deploy another one with the Service Admin role. This way, you won't need to specify keys with variables, as clarified by a Google developer in this specific answer.
To improve security, the best way is to never use a service account key file, locally or on GCP (I wrote an article on this). To achieve this, Google Cloud services have an automatically loaded service account, either the default one or, when possible, a custom one.
On Cloud Run, the default service account is the Compute Engine default service account (I recommend never using it: it has the Editor role on the project, which is far too broad!), or you can specify the service account to use (the --service-account= parameter).
Then, in your code, simply use the ADC mechanism (Application Default Credentials) to get your credentials, like this in Python:
import google.auth
credentials, project_id = google.auth.default(scopes=SCOPES)
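With those credentials you can build API clients as usual. For instance (the Cloud Storage client is only an example here, and SCOPES stands in for whatever scope list you need):
import google.auth
from google.cloud import storage  # example client library, just for illustration

SCOPES = ["https://www.googleapis.com/auth/cloud-platform"]  # placeholder scope list

# ADC picks up the Cloud Run service account at runtime, no key file needed
credentials, project_id = google.auth.default(scopes=SCOPES)

client = storage.Client(project=project_id, credentials=credentials)
for bucket in client.list_buckets():
    print(bucket.name)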
I've found one way to solve the problem.
First, as suggested by guillaume blaquiere's answer, I used the google.auth ADC mechanism:
import google.auth
credentials, project_id = google.auth.default(scopes=SCOPES)
However, as I need to impersonate GSuite (now Workspace) accounts, this method is not enough, because the credentials object it returns does not have the with_subject method. This led me to this similar post and a specific answer which shows a way to convert google.auth credentials into the Credentials object returned by service_account.Credentials.from_service_account_file. There was one problem with that solution: it seemed an authentication scope was missing.
All I had to do was add the https://www.googleapis.com/auth/cloud-platform scope in the following places:
The SCOPES variable in the code
Google Admin > Security > API Controls > Set client ID and scope for the service account I am deploying with
At the OAuth Consent Screen of my project
After that, my Cloud Run had access to credentials that were able to impersonate user's accounts without using key files.
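For reference, the key-file-free delegation pattern ends up looking roughly like this (a heavily simplified sketch: the service account email, user email and scopes are placeholders, and it assumes the runtime service account can sign for itself via the IAM Credentials API):
import google.auth
from google.auth import iam
from google.auth.transport import requests
from google.oauth2 import service_account

TOKEN_URI = "https://accounts.google.com/o/oauth2/token"
SCOPES = ["https://www.googleapis.com/auth/cloud-platform"]  # plus your delegated Workspace scopes
SERVICE_ACCOUNT_EMAIL = "my-sa@my-project.iam.gserviceaccount.com"  # placeholder
USER_EMAIL = "user@example.com"  # placeholder

# Start from the runtime (ADC) credentials of the Cloud Run service account
source_credentials, _ = google.auth.default(scopes=SCOPES)

# Build service-account credentials that sign through the IAM API and support a subject
signer = iam.Signer(requests.Request(), source_credentials, SERVICE_ACCOUNT_EMAIL)
delegated_credentials = service_account.Credentials(
    signer,
    SERVICE_ACCOUNT_EMAIL,
    TOKEN_URI,
    scopes=SCOPES,
    subject=USER_EMAIL,
)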
I have my .aws/credentials set as
[default]
aws_access_key_id = [key]
aws_secret_access_key = [secret! Shh!]
and .aws/config
[profile elevated]
role_arn = [elevated role arn]
source_profile = default
mfa_serial = [my device arn]
With the credentials and config files set up like that, boto3 will automatically make the corresponding AssumeRole calls to AWS STS on your behalf. It will handle in-memory caching as well as refreshing credentials as needed
so that when I use something like
session = boto3.Session(profile_name = "elevated")
in a longer function, all I have to do is input my MFA code immediately after hitting "enter", and everything runs with the credentials managed independently of my input. This is great. I like that when I need to assume a role in another AWS account, boto3 handles all of the calls to STS and all I have to do is babysit.
What about when I don't want to assume another role? If I want to do things directly as my user as a member of the group to which my user is assigned? Is there a way to let boto3 automatically handle the credentials aspect of that?
I see that I can hard-code my aws_access_key_id and aws_secret_access_key into a function, but is there a way to force boto3 to handle the session tokens by just using the config and credentials files?
Method 2 in this answer looked promising but it also seems to rely on using the AWS CLI to input and store the keys/session token prior to running a Python script and still requires hard-coding variables into a CLI.
Is there a way to make this automatic by using the config and credentials files that doesn't require having to manually input AWS access keys and handle session tokens?
If you are running the application on EC2, you can attach roles via EC2 Roles.
In your code, you can dynamically get the credentials depending on which role you attach.
import boto3

# The session picks up the instance profile (EC2 role) credentials automatically
session = boto3.Session()
credentials = session.get_credentials().get_frozen_credentials()
access_key = credentials.access_key
secret_key = credentials.secret_key
token = credentials.token
You may also want to use botocore.credentials.RefreshableCredentials to refresh your token once in a while.
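A rough sketch of that pattern (not an official recipe: the refresh callback below is a placeholder for however you obtain fresh credentials, and it pokes the private _credentials attribute of the botocore session):
import boto3
from botocore.credentials import RefreshableCredentials
from botocore.session import get_session

def refresh():
    # Placeholder: replace with your own way of getting fresh credentials (STS, metadata, ...)
    creds = boto3.client("sts").get_session_token()["Credentials"]
    return {
        "access_key": creds["AccessKeyId"],
        "secret_key": creds["SecretAccessKey"],
        "token": creds["SessionToken"],
        "expiry_time": creds["Expiration"].isoformat(),
    }

refreshable = RefreshableCredentials.create_from_metadata(
    metadata=refresh(),
    refresh_using=refresh,
    method="sts-get-session-token",
)

botocore_session = get_session()
botocore_session._credentials = refreshable  # private attribute, see caveat above
session = boto3.Session(botocore_session=botocore_session)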
I would like to know a more efficient way than renewing the STS credentials for a cross-account role on every run of my Lambda. By default those credentials last for 1 hour, but so far I'm doing it this way:
import boto3

def aws_session(role_arn, session_name):
    sts = boto3.client('sts')
    response = sts.assume_role(RoleArn=role_arn, RoleSessionName=session_name)
    session = boto3.Session(
        aws_access_key_id=response['Credentials']['AccessKeyId'],
        aws_secret_access_key=response['Credentials']['SecretAccessKey'],
        aws_session_token=response['Credentials']['SessionToken'],
        region_name='us-east-1')
    return session

def lambda_handler(event, context):
    session = aws_session(role_arn=ARN, session_name='CrossAccountLambdaRole')
    s3_sts = session.resource('s3')
But it is terribly inefficient: renewing the credentials takes more than ~1500 ms each time instead of ~300 ms, and as we all know, we are charged on execution duration. Could anyone help me figure out how to refresh the credentials only when the token expires? Between executions we are not guaranteed to end up in the same "container", so how can I make a global variable work?
Thx a lot
Remove AssumeRole
I think your problem stems from the fact that your code is picking the role it needs on each run. Your assume-role code should indeed be generating a new token on each call. I'm not familiar with the Python boto library, but in Node I only call AssumeRole when I'm testing locally and want to pull down new credentials; I save those credentials and never call AssumeRole again until I want new ones. Every time I call AssumeRole, I get new credentials as expected. You don't need STS directly to run your Lambda functions.
An Alternate Approach:
For the production application my Lambda code does not pick its role. The automation scripts that build the Lambda function assign it a role, and the Lambda function will use that role forever, with AWS managing the refresh of credentials on the back end as they expire. You can do this by building your Lambda function in CloudFormation and specifying what role you want it to use.
Lambda via CloudFormation
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-function.html
If you then want to view what credentials your function is operating with you can print out environment variables. Lambda will pass in temporary credentials to your function and the credentials will be associated with the role you've defined.
Simpler Approach
If you don't want to deal with CloudFormation, deploy your function manually in the AWS console and specify there the role it should run with. But the bottom line is you don't need to use STS inside your Lambda code. Assign the role externally.
Since you're going across accounts, you obviously can't follow the common advice of attaching the role directly to the Lambda.
Your best option is parameter store which is covered in detail here:
https://aws.amazon.com/blogs/compute/sharing-secrets-with-aws-lambda-using-aws-systems-manager-parameter-store/
Simply have the Lambda request the credentials from there instead.
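Fetching the stored credentials from Parameter Store inside the handler is only a couple of calls; a small sketch with a placeholder parameter name:
import json
import boto3

ssm = boto3.client("ssm")

def get_cross_account_credentials():
    # "/myapp/cross-account-credentials" is a placeholder SecureString parameter
    response = ssm.get_parameter(
        Name="/myapp/cross-account-credentials",
        WithDecryption=True,
    )
    return json.loads(response["Parameter"]["Value"])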
That said, it's probably not going to save much time compared to STS requests... But I've not timed either process.
A perhaps less-good but fairly simple way is to store the credentials in /tmp and build a process around ensuring the credentials remain valid: perhaps assume the role with a 65-minute duration and save the credentials to a timestamped file with the minutes/seconds dropped. If the file exists, read it in with file I/O.
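A minimal sketch of that /tmp caching idea (the file name, duration and checks are simplified placeholders; treat it as a starting point rather than a hardened solution):
import json
import os
import time

import boto3

CACHE_PATH = "/tmp/assumed-role-credentials.json"  # placeholder cache location

def get_credentials(role_arn, session_name):
    # Reuse cached credentials if they still have a few minutes left
    if os.path.exists(CACHE_PATH):
        with open(CACHE_PATH) as f:
            cached = json.load(f)
        if cached["expires_at"] - time.time() > 5 * 60:
            return cached

    creds = boto3.client("sts").assume_role(
        RoleArn=role_arn,
        RoleSessionName=session_name,
        DurationSeconds=3900,  # ~65 minutes, as suggested above
    )["Credentials"]
    cached = {
        "access_key": creds["AccessKeyId"],
        "secret_key": creds["SecretAccessKey"],
        "token": creds["SessionToken"],
        "expires_at": creds["Expiration"].timestamp(),
    }
    with open(CACHE_PATH, "w") as f:
        json.dump(cached, f)
    return cached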
Keep in mind you're saving credentials in a way that can be compromised if your code somehow allows the file to be read... Though as a Lambda, under the shared responsibility model, it's reasonably secure compared to using this strategy on a persistent server.
Always use least-privilege roles. Only allow your trusted account to assume this role... I think you can even lock the trust policy down to a specific incoming Lambda role that is allowed to assume it. That way, credentials leaked by somehow reading/outputting the file would require a malicious user to compromise some other aspect of your account (if locked down by account number only), or to achieve remote code execution inside your Lambda itself (if locked to the Lambda)... Though at that point, your credentials are already available to the malicious user anyway.