Best way to read aws credentials file - python

In my Python code I need to extract the AWS credentials
AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID,
which are stored in a plain text file as described here:
https://docs.aws.amazon.com/sdkref/latest/guide/file-format.html
I know the name of the file (AWS_SHARED_CREDENTIALS_FILE)
and the name of the profile (AWS_PROFILE).
My current approach is to read and parse this file in Python myself to get AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID.
But I hope there is already a standard way to get them using boto3 or some other library.
Please suggest.

Would something like this work for you, or am I misunderstanding the question? Basically start a session for the appropriate profile (or the default, I guess), and then query those values from the credentials object:
import boto3

session = boto3.Session(profile_name=<...your-profile...>)
credentials = session.get_credentials()
print("AWS_ACCESS_KEY_ID = {}".format(credentials.access_key))
print("AWS_SECRET_ACCESS_KEY = {}".format(credentials.secret_key))
print("AWS_SESSION_TOKEN = {}".format(credentials.token))

As far as I understand, the AWS credentials file uses a standard INI file format. You can utilize configparser to parse the file easily. Please refer to: https://docs.python.org/3/library/configparser.html.
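For example, a minimal configparser sketch (assuming the file is at the default ~/.aws/credentials location unless AWS_SHARED_CREDENTIALS_FILE says otherwise, and the profile falls back to "default"):

import configparser
import os

# Honor the AWS_SHARED_CREDENTIALS_FILE and AWS_PROFILE environment variables,
# falling back to the standard defaults.
path = os.environ.get("AWS_SHARED_CREDENTIALS_FILE",
                      os.path.expanduser("~/.aws/credentials"))
profile = os.environ.get("AWS_PROFILE", "default")

config = configparser.ConfigParser()
config.read(path)

aws_access_key_id = config[profile]["aws_access_key_id"]
aws_secret_access_key = config[profile]["aws_secret_access_key"]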
For boto3, if you put the credentials in the standard locations, it will load them automatically; a short sketch follows the reference below.
Boto3 will look in several locations when searching for credentials.
The mechanism is to search through a list of possible locations and stop as soon as it finds credentials. The order in which Boto3 searches for credentials is:

1. Passing credentials as parameters in the boto3.client() method
2. Passing credentials as parameters when creating a Session object
3. Environment variables
4. Shared credentials file (~/.aws/credentials)
5. AWS config file (~/.aws/config)
6. Assume Role provider
7. Boto2 config file (/etc/boto.cfg and ~/.boto)
8. Instance metadata service on an Amazon EC2 instance that has an IAM role configured
Reference: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
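As an illustration, a minimal sketch of that automatic loading (the profile name comes from AWS_PROFILE here; nothing else is assumed):

import boto3
import os

# With credentials in ~/.aws/credentials (or pointed to by
# AWS_SHARED_CREDENTIALS_FILE), no keys need to be passed at all.
s3 = boto3.client("s3")

# Selecting a non-default profile from the shared credentials file.
session = boto3.Session(profile_name=os.environ.get("AWS_PROFILE", "default"))
s3_from_profile = session.client("s3")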

Related

S3 URL - Download with python

I need to download a file from this URL https://desafio-rkd.s3.amazonaws.com/disney_plus_titles.csv with Python. I tried to do it with requests.get(), but it returns access denied. I understand that I have to authenticate. I have the access key and the secret key, but I do not know how to do it.
Help me, please?
The preferred way would be to use the boto3 library for Amazon S3. It has a download_file() method, which you would use like this:
import boto3
s3_client = boto3.client('s3')
s3_client.download_file('desafio-rkd', 'disney_plus_titles.csv', 'disney_plus_titles.csv')
The parameters are: Bucket, Key, and the local filename to use when saving the file.
Also, you will need to provide an Access Key and Secret Key. The preferred way to do this is to store them in a credentials file. This can be done by using the AWS Command-Line Interface (CLI) aws configure command.
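If you prefer not to rely on a credentials file at all, boto3 also accepts the keys directly as client parameters (a sketch; the key values are placeholders, and keeping keys out of source code is still recommended):

import boto3

# Passing keys explicitly instead of reading them from ~/.aws/credentials.
s3_client = boto3.client(
    's3',
    aws_access_key_id='AKIA...',        # placeholder
    aws_secret_access_key='...',        # placeholder
)
s3_client.download_file('desafio-rkd', 'disney_plus_titles.csv', 'disney_plus_titles.csv')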
See: Credentials — Boto3 documentation

Using Boto3 to get AWS configuration option

I am looking for a way to perform the equivalent of the AWS CLI command aws configure get varname [--profile profile-name] using boto3 in Python. Does anyone know if this is possible without either:
Parsing the AWS config file myself
Somehow interacting with the AWS CLI itself from my Python script
For more context, I am writing a Python CLI tool that will interact with AWS APIs using boto3. The tool uses an AWS session token stored in a profile in the ~/.aws/credentials file. I am using the saml2aws CLI to fetch AWS credentials from my company's identity provider, which writes the aws_access_key_id, aws_secret_access_key, aws_session_token, aws_security_token, x_principal_arn, and x_security_token_expires parameters to the ~/.aws/credentials file like so:
[saml]
aws_access_key_id = #REMOVED#
aws_secret_access_key = #REMOVED#
aws_session_token = #REMOVED#
aws_security_token = #REMOVED#
x_principal_arn = arn:aws:sts::000000000123:assumed-role/MyAssumedRole
x_security_token_expires = 2019-08-19T15:00:56-06:00
By the nature of my Python CLI tool, it will sometimes run past the expiration time of the AWS session token, whose lifetime is enforced to be quite short by my company. I want the tool to check the expiration time before it starts its critical task, to verify that it has enough time to complete it, and, if not, alert the user to refresh their session token.
Using the AWS CLI, I can fetch the expiration time of the AWS session token from the ~/.aws/credentials file like this:
$ aws configure get x_security_token_expires --profile saml
2019-08-19T15:00:56-06:00
and I am curious if boto3 has a mechanism I was unable to find to do something similar.
As an alternate solution, given an already generated AWS session token, is it possible to fetch the expiration time of it? However, given the lack of answers on questions such as Ways to find out how soon the AWS session expires?, I would guess not.
Since the official AWS CLI is powered by botocore, I was able to dig into the source to find out how aws configure get is implemented. It's possible to read the profile configuration through the botocore Session object. Here is some code to get the config profile and value used in your example:
import botocore.session
# Create an empty botocore session directly
session = botocore.session.Session()
# Get config of desired profile. full_config is a standard python dictionary.
profiles_config = session.full_config.get("profiles", {})
saml_config = profiles_config.get("saml", {})
# Get config value. This will be None if the setting doesn't exist.
saml_security_token_expires = saml_config.get("x_security_token_expires")
I'm using code similar to the above as part of a transparent session cache. It checks for a profile's role_arn so I can identify a cached session to load if one exists and hasn't expired.
As far as the alternate question of knowing how long a given session has before expiring, you are correct in that there is currently no API call that can tell you this. Session expiration is only given when the session is created, either through STS get_session_token or assume_role API calls. You have to hold onto the expiration info yourself after that.
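Building on the snippet above, a small sketch of the expiration check described in the question (assuming x_security_token_expires holds an ISO 8601 timestamp with a UTC offset, as in the example profile, and Python 3.7+ for fromisoformat):

from datetime import datetime, timezone

# saml_security_token_expires comes from the full_config lookup above,
# e.g. "2019-08-19T15:00:56-06:00".
expires_at = datetime.fromisoformat(saml_security_token_expires)
remaining = expires_at - datetime.now(timezone.utc)

if remaining.total_seconds() < 15 * 60:  # e.g. require at least 15 minutes left
    print("Session token expires soon; refresh it with saml2aws before continuing.")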

Authenticating a google.storage.Client without saving the service account JSON to disk

For authentication of a Google Cloud Platform storage client, I'd like to NOT write the service account JSON (credentials file that you create) to disk. I would like to keep them purely in memory after loading them from a Hashicorp Vault keystore that is shared by all cloud instances. Is there a way to pass the JSON credentials directly, rather than passing a pathlike/file object?
I understand how to do this using a pathlike/file object as follows, but this is what I want to avoid (due to security issues, I'd prefer to never write them to disk):
from google.cloud import storage

# set an environment variable that points to a JSON file (shell command):
#   export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service_account.json"

# create the client (assumes the environment variable is set)
client = storage.Client()

# alternately, one can create the client without the environment
# variable, but this still relies on a JSON file.
client = storage.Client.from_service_account_json("/path/to/service_account.json")
I have tried to get around this by passing the JSON data (json_data) directly, but this throws the error: TypeError: expected str, bytes or os.PathLike object, not dict
json_data = {....[JSON]....}
client = storage.Client.from_service_account_json(json_data)
I also tried dumping the dict to a JSON string:
json_data = {....[JSON]....}
client = storage.Client.from_service_account_json(json.dumps(json_data))
but I get the error:
with io.open(json_credentials_path, "r", encoding="utf-8") as json_fi:
OSError: [Errno 63] File name too long: '{"type": "service_account", "project_id",......
Per the suggestion from @johnhanley, I have also tried:
from google.cloud import storage
from google.oauth2 import service_account

json_data = {...data loaded from keystore...}
type(json_data)       # dict

credentials = service_account.Credentials.from_service_account_info(json_data)
type(credentials)     # google.oauth2.service_account.Credentials

client = storage.Client(credentials=credentials)
This resulted in the DefaultCredentialsError:
raise exceptions.DefaultCredentialsError(_HELP_MESSAGE)
google.auth.exceptions.DefaultCredentialsError: Could not automatically determine credentials. Please set GOOGLE_APPLICATION_CREDENTIALS or explicitly create credentials and re-run the application. For more information, please see https://developers.google.com/accounts/docs/application-default-credentials.
If you have ideas on how to solve this, I'd love to hear it!
Currently there aren't built-in methods in the client library for Cloud Storage to achieve this. So there would be two possibilities:
As @JohnHanley stated, use the provided built-in methods [1][2] to construct the Cloud Storage client (a short sketch follows below).
You might also consider using another product such as Cloud Functions or App Engine, which would allow you to configure authentication at the service level and avoid providing the service account credentials.
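For reference, a minimal sketch of the first option (assuming the dict loaded from Vault is a complete service account key including its project_id; the helper name is hypothetical). Passing the project explicitly keeps the client from falling back to default-credential discovery for it:

from google.cloud import storage
from google.oauth2 import service_account

def make_storage_client(json_data: dict) -> storage.Client:
    # json_data is the parsed service account key (a dict), e.g. loaded from
    # Vault; it is never written to disk.
    credentials = service_account.Credentials.from_service_account_info(json_data)
    return storage.Client(project=json_data["project_id"], credentials=credentials)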

Automatic handling of session token with boto3 and MFA

I have my .aws/credentials set as
[default]
aws_access_key_id = [key]
aws_secret_access_key = [secret! Shh!]
and .aws/config
[profile elevated]
role_arn = [elevated role arn]
source_profile = default
mfa_serial = [my device arn]
With the credentials and config files set up like that, boto3 will automatically make the corresponding AssumeRole calls to AWS STS on your behalf. It will handle in-memory caching as well as refreshing credentials as needed,
so that when I use something like
session = boto3.Session(profile_name = "elevated")
in a longer function, all I have to do is input my MFA code immediately after hitting "enter" and everything runs, with credentials managed independently of my input. This is great. I like that when I need to assume a role in another AWS account, boto3 handles all of the calls to STS and all I have to do is babysit.
What about when I don't want to assume another role, and instead want to act directly as my user, as a member of the group to which my user is assigned? Is there a way to let boto3 automatically handle the credentials aspect of that?
I see that I can hard-code my aws_access_key_id and aws_secret_access_key into a function, but is there a way to force boto3 into handling the session tokens by just using the config and credentials files?
Method 2 in this answer looked promising but it also seems to rely on using the AWS CLI to input and store the keys/session token prior to running a Python script and still requires hard-coding variables into a CLI.
Is there a way to make this automatic by using the config and credentials files that doesn't require having to manually input AWS access keys and handle session tokens?
If you are running the application on EC2, you can attach a role to the instance via EC2 instance roles.
In your code, you can then dynamically get the credentials for whichever role is attached.
import boto3

session = boto3.Session()
credentials = session.get_credentials().get_frozen_credentials()
access_key = credentials.access_key
secret_key = credentials.secret_key
token = credentials.token
You may also want to use botocore.credentials.RefreshableCredentials to refresh your token once in a while; a rough sketch follows.
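A rough sketch of that idea (assumptions: the temporary credentials come from STS get_session_token, and the refreshable credentials are attached through the botocore session's private _credentials attribute, which is a common but unofficial pattern):

import boto3
import botocore.session
from botocore.credentials import RefreshableCredentials

def fetch_temporary_credentials():
    # Fetch fresh temporary credentials from STS and return them in the
    # shape RefreshableCredentials expects.
    sts = boto3.client("sts")
    creds = sts.get_session_token(DurationSeconds=3600)["Credentials"]
    return {
        "access_key": creds["AccessKeyId"],
        "secret_key": creds["SecretAccessKey"],
        "token": creds["SessionToken"],
        "expiry_time": creds["Expiration"].isoformat(),
    }

refreshable = RefreshableCredentials.create_from_metadata(
    metadata=fetch_temporary_credentials(),    # initial credentials
    refresh_using=fetch_temporary_credentials, # called again near expiry
    method="sts-get-session-token",
)

# Attach to a botocore session (private attribute, unofficial) and wrap it in boto3.
botocore_sess = botocore.session.get_session()
botocore_sess._credentials = refreshable
session = boto3.Session(botocore_session=botocore_sess)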

Use Python Google Storage Client without credentials

I am using the Python Google Storage Client, however I am using a bucket with public read/write access. (I know this is usually a terrible idea but I have a rare use case where it is fine).
When I try to retrieve some files, I get a DefaultCredentialsError.
from google.cloud import storage

BUCKET_NAME = 'my-public-bucket-name'
storage_client = storage.Client()
bucket = storage_client.get_bucket(BUCKET_NAME)

def list_blobs(prefix, delimiter=None):
    blobs = bucket.list_blobs(prefix=prefix, delimiter=delimiter)
    print('Blobs:')
    for blob in blobs:
        print(blob.name)
The specific error reads:
google.auth.exceptions.DefaultCredentialsError: Could not automatically determine credentials. Please set GOOGLE_APPLICATION_CREDENTIALS or explicitly create credentials and re-run the application. For more information, please see https://cloud.google.com/docs/authentication/getting-started
That page suggests using OAuth or other tokens, but I shouldn't need these since my bucket is public? I can make an HTTP request to the bucket in Chrome and receive data.
How should I get around this issue? Can I provide default or null credentials?
The default for a storage client with no parameters is to use environment credentials (e.g. authenticate with the gcloud tools first). If you want to use a client with no credentials, you have to use the create_anonymous_client method, which lets you access resources available to allUsers.
Be careful, though, which APIs you use: not all of them support anonymous credentials. For example, instead of client.get_bucket('my-bucket') you have to use client.bucket(bucket_name='my-bucket').
Also note that any permissions error seems to return a generic ValueError: Anonymous credentials cannot be refreshed. For example, if you try to overwrite an existing file while only having read/write permissions.
So a full example of uploading a file to a publicly accessible bucket is
from google.cloud import storage
client = storage.Client.create_anonymous_client()
bucket = client.bucket(bucket_name='my-public-bucket')
blob = bucket.blob('my-file')
blob.upload_from_filename('my-local-file')
From "Cloud Storage Authentication":
Most of the operations you perform in Cloud Storage must be authenticated. The only exceptions are operations on objects that allow anonymous access. Objects are anonymously accessible if the allUsers group has READ permission. The allUsers group includes anyone on the Internet.
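For the original listing use case, a similar anonymous sketch (assuming the bucket and its objects allow public reads; the names are placeholders) would be:

from google.cloud import storage

client = storage.Client.create_anonymous_client()
bucket = client.bucket(bucket_name='my-public-bucket-name')

# Listing works without credentials as long as allUsers has read access.
for blob in bucket.list_blobs(prefix='some/prefix'):
    print(blob.name)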
