I have been provided access to AWS and currently get credentials manually, i.e. I copy the access_key_id, secret_access_key, and session_token, which expire every hour. I use these credentials to extract information from Route 53. I would like to automate fetching the access_key_id, secret_access_key, and session_token instead of copying them into the script by hand. Is there any way to automate this?
The process of refreshing tokens is documented here:
Auto-refresh AWS Tokens Using IAM Role and boto3
It is poorly covered in the boto3 documentation, but this should help.
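For illustration, here is a minimal sketch of that auto-refresh pattern, assuming your base credentials are allowed to call sts:AssumeRole (the role ARN is a placeholder, and attaching the credentials relies on a private botocore attribute):

import boto3
import botocore.session
from botocore.credentials import RefreshableCredentials

def refresh_credentials():
    # Assume the role with your base credentials and return the temporary
    # credentials in the shape botocore's refresher expects.
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/MyRole",  # placeholder
        RoleSessionName="route53-reader",
        DurationSeconds=3600,
    )["Credentials"]
    return {
        "access_key": creds["AccessKeyId"],
        "secret_key": creds["SecretAccessKey"],
        "token": creds["SessionToken"],
        "expiry_time": creds["Expiration"].isoformat(),
    }

# botocore re-invokes refresh_credentials shortly before each expiry,
# so the hourly copy-paste goes away.
credentials = RefreshableCredentials.create_from_metadata(
    metadata=refresh_credentials(),
    refresh_using=refresh_credentials,
    method="sts-assume-role",
)

botocore_session = botocore.session.get_session()
botocore_session._credentials = credentials  # private attribute, see note above
session = boto3.Session(botocore_session=botocore_session)

route53 = session.client("route53", region_name="us-east-1")
print(route53.list_hosted_zones())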
My data source is a SaaS server's API endpoints, and I aim to use Python (the boto3 library) to move the data into an AWS S3 bucket.
API access is granted via an authorized username/password combination and a unique api-key.
Then, on every initial API call, I need to get a token before I can fetch further info.
I have 2 questions:
1. How should I manage those secrets: save them to a config file (*.ini, *.json, *.yaml) or store them via AWS Secrets Manager?
2. The token handling is a bit challenging. The basic way is to fetch a new token per endpoint and then do the API call, but that ends up as too many pipelines: if, say, 100 endpoints are needed for downstream business needs, I would have to craft 100 pipelines from a universal template, repeating it 100 times.
I am new to the Python programming world, so feel free to comment and share any use cases.
Much appreciated!!
I searched and read these posts:
saving from api to s3 bucket
How to write a file or data to an S3 object using boto3
I found the following helpful:
python-decouple summary: store parameters in .ini or .env files.
There are a few options for managing (hiding) sensitive info:
a. IAM role
b. Store secrets using **Parameter Store**
c. Store secrets using **Secrets Manager** - the method currently recommended by AWS
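To make option (c) and the "one pipeline, many endpoints" idea concrete, here is a minimal sketch. The secret name "saas/api-credentials", the JSON keys inside it, and the api.example.com URLs are all hypothetical placeholders:

import json

import boto3
import requests

def get_api_secrets(secret_name="saas/api-credentials"):
    # Fetch the username/password/api-key JSON from Secrets Manager (option c).
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_name)
    return json.loads(response["SecretString"])

def get_token(secrets):
    # Exchange the credentials for a short-lived token once, then reuse it
    # across endpoints instead of re-fetching per call.
    response = requests.post(
        "https://api.example.com/token",  # hypothetical token endpoint
        auth=(secrets["username"], secrets["password"]),
        headers={"x-api-key": secrets["api_key"]},
    )
    response.raise_for_status()
    return response.json()["token"]

def sync_endpoints(endpoints, bucket):
    # One parameterized loop instead of 100 near-identical pipelines.
    secrets = get_api_secrets()
    token = get_token(secrets)
    s3 = boto3.client("s3")
    for endpoint in endpoints:
        data = requests.get(
            f"https://api.example.com/{endpoint}",  # hypothetical base URL
            headers={"Authorization": f"Bearer {token}"},
        )
        data.raise_for_status()
        s3.put_object(Bucket=bucket, Key=f"{endpoint}.json", Body=data.content)

sync_endpoints(["orders", "customers"], bucket="my-data-bucket")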
For my current Python project I'm using the Microsoft Azure SDK for Python.
I want to copy a specific blob from one container path to another, and I have already tested some options described here.
Overall they basically "work", but unfortunately the new_blob.start_copy_from_url(source_blob_url) command always leads to an error: ErrorCode:CannotVerifyCopySource.
Is someone getting the same error message here, or has an idea how to solve it?
I also tried appending a SAS token to the source_blob_url, but it still doesn't work. I have the feeling that this is somehow connected to the access levels of the storage account, but so far I haven't been able to figure it out. Hopefully someone here can help me.
As you have mentioned, you might be receiving this error due to missing permissions on the SAS token.
The difference from my code was that I used the blob storage sas_token from the Azure portal, instead of generating it directly for the blob client with the Azure function.
By default, a SAS is generated with a number of settings that limit access to certain areas of your storage account: permissions such as read/write, the allowed services and resource types, start and expiration date/time, allowed IP addresses, etc.
It's not that you always need to generate the SAS directly for the blob client in code; you can generate one from the portal too, as long as you grant the right permissions. A sketch of generating one in code follows below the reference.
REFERENCES: Grant limited access to Azure Storage resources using SAS - MSFT Document
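For what it's worth, here is a minimal sketch of generating a read-only SAS for the source blob in code and passing it in the copy URL. The account name, key, container, and blob names are placeholders:

from datetime import datetime, timedelta

from azure.storage.blob import (
    BlobSasPermissions,
    BlobServiceClient,
    generate_blob_sas,
)

account_name = "mystorageaccount"  # placeholder
account_key = "<account-key>"      # placeholder
source_container, source_blob = "src", "folder/file.txt"
dest_container, dest_blob = "dst", "folder/file.txt"

# Read-only, short-lived SAS scoped to the single source blob.
sas_token = generate_blob_sas(
    account_name=account_name,
    container_name=source_container,
    blob_name=source_blob,
    account_key=account_key,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(hours=1),
)

source_blob_url = (
    f"https://{account_name}.blob.core.windows.net/"
    f"{source_container}/{source_blob}?{sas_token}"
)

service = BlobServiceClient(
    account_url=f"https://{account_name}.blob.core.windows.net",
    credential=account_key,
)
new_blob = service.get_blob_client(dest_container, dest_blob)
new_blob.start_copy_from_url(source_blob_url)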
I am looking for a way to perform the equivalent of the AWS CLI's aws configure get varname [--profile profile-name] command using boto3 in Python. Does anyone know if this is possible without either:
Parsing the AWS config file myself
Somehow interacting with the AWS CLI itself from my python script
For more context, I am writing a Python CLI tool that will interact with AWS APIs using boto3. The tool uses an AWS session token stored in a profile in the ~/.aws/credentials file. I am using the saml2aws CLI to fetch AWS credentials from my company's identity provider, which writes the aws_access_key_id, aws_secret_access_key, aws_session_token, aws_security_token, x_principal_arn, and x_security_token_expires parameters to the ~/.aws/credentials file like so:
[saml]
aws_access_key_id = #REMOVED#
aws_secret_access_key = #REMOVED#
aws_session_token = #REMOVED#
aws_security_token = #REMOVED#
x_principal_arn = arn:aws:sts::000000000123:assumed-role/MyAssumedRole
x_security_token_expires = 2019-08-19T15:00:56-06:00
By the nature of my Python CLI tool, it will sometimes execute past the expiration time of the AWS session token, which my company enforces to be quite short. I want the tool to check the expiration time before it starts its critical task, to verify that it has enough time to complete, and if not, to alert the user to refresh their session token.
Using the AWS CLI, I can fetch the expiration time of the AWS session token from the ~/.aws/credentials file like this:
$ aws configure get x_security_token_expires --profile saml
2019-08-19T15:00:56-06:00
and I am curious whether boto3 has a mechanism, which I was unable to find, to do something similar.
As an alternate solution, given an already generated AWS session token, is it possible to fetch its expiration time? However, given the lack of answers on questions such as Ways to find out how soon the AWS session expires?, I would guess not.
Since the official AWS CLI is powered by botocore (the library underlying boto3), I was able to dig into the source to find out how aws configure get is implemented. It's possible to read the profile configuration through the botocore Session object. Here is some code to get the config profile and value used in your example:
import botocore.session
# Create an empty botocore session directly
session = botocore.session.Session()
# Get config of desired profile. full_config is a standard python dictionary.
profiles_config = session.full_config.get("profiles", {})
saml_config = profiles_config.get("saml", {})
# Get config value. This will be None if the setting doesn't exist.
saml_security_token_expires = saml_config.get("x_security_token_expires")
I'm using code similar to the above as part of a transparent session cache. It checks for a profile's role_arn so I can identify a cached session to load if one exists and hasn't expired.
As far as the alternate question of knowing how long a given session has before expiring, you are correct in that there is currently no API call that can tell you this. Session expiration is only given when the session is created, either through STS get_session_token or assume_role API calls. You have to hold onto the expiration info yourself after that.
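Building on the snippet above, a small sketch of that "hold onto it yourself" approach: parse the profile's x_security_token_expires string and warn when little time remains. The ten-minute threshold is an arbitrary example, and datetime.fromisoformat handles the -06:00 offset on Python 3.7+:

from datetime import datetime, timedelta, timezone

if saml_security_token_expires is None:
    print("No expiry recorded for profile 'saml'; please refresh your session.")
else:
    expires_at = datetime.fromisoformat(saml_security_token_expires)
    remaining = expires_at - datetime.now(timezone.utc)
    if remaining < timedelta(minutes=10):
        print(f"Session expires in {remaining}; refresh it via saml2aws first.")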
According to the Amazon WorkDocs SDK page, you can use Boto3 to migrate your content to Amazon WorkDocs. I found the entry for the WorkDocs client in the Boto3 documentation, but every call seems to require an "AuthenticationToken" parameter. The only information I can find on AuthenticationToken is that it is supposed to be an "Amazon WorkDocs authentication token".
Does anyone know what this token is? How do I get one? Are there any code examples of using the WorkDocs client in Boto3?
I am trying to create a simple Python script that will upload a single document into WorkDocs, but there seems to be little to no information on how to do this. I was easily able to write a script that can upload/download files from S3, but this seems like something else entirely.
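In case it helps, here is a minimal sketch of the upload flow as I understand it from the WorkDocs API documentation: request a signed upload URL, PUT the content to it, then activate the version. If I recall the boto3 docs correctly, AuthenticationToken is optional when calling with AWS (IAM) administrator credentials rather than as a WorkDocs user; the folder ID and file name below are placeholders:

import boto3
import requests

workdocs = boto3.client("workdocs")

# Step 1: ask WorkDocs for a signed upload URL for a new document version.
resp = workdocs.initiate_document_version_upload(
    ParentFolderId="<folder-id>",  # placeholder WorkDocs folder ID
    Name="report.txt",
    ContentType="text/plain",
)

# Step 2: PUT the file content to the signed URL using the signed headers.
upload = resp["UploadMetadata"]
with open("report.txt", "rb") as f:
    requests.put(upload["UploadUrl"], data=f, headers=upload["SignedHeaders"])

# Step 3: mark the uploaded version as active so it becomes visible.
workdocs.update_document_version(
    DocumentId=resp["Metadata"]["Id"],
    VersionId=resp["Metadata"]["LatestVersionMetadata"]["Id"],
    VersionStatus="ACTIVE",
)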
I need to upload new device tokens to AWS SNS, and I would rather do it in batches than one token at a time.
According to the AWS documentation this is supported by their API, and an example is given for the Java SDK using a "bulkupload" package.
The problem is that I wrote everything in Python, and I can't find any reference to this feature in the Boto3 documentation.
Does anyone know of a way to do this in Python (not necessarily using Boto)? Or am I doomed to either upload tokens one by one or rewrite everything in Java?
Thanks!
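One workaround sketch, plainly not the Java bulk-upload feature (I know of no boto3 equivalent): keep the one-at-a-time create_platform_endpoint call but parallelize it client-side with a thread pool. The application ARN and tokens are placeholders, and boto3 low-level clients are safe to share across threads:

from concurrent.futures import ThreadPoolExecutor

import boto3

sns = boto3.client("sns")
APP_ARN = "arn:aws:sns:us-east-1:123456789012:app/GCM/my-app"  # placeholder

def register_token(token):
    # Still one registration per API call; the "batching" here is purely
    # client-side concurrency, not a server-side bulk API.
    response = sns.create_platform_endpoint(
        PlatformApplicationArn=APP_ARN,
        Token=token,
    )
    return response["EndpointArn"]

tokens = ["token-1", "token-2", "token-3"]  # placeholder device tokens
with ThreadPoolExecutor(max_workers=10) as pool:
    endpoint_arns = list(pool.map(register_token, tokens))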