I am using Python boto3 to get information about one of my AWS accounts. Can I automate this so that I get the same information from all of my accounts? I know boto3 reads credentials from the credentials file, but can it read multiple sets of credentials?
Boto3 can read credentials in many different ways.
One option is to have a credentials file with a profile for each of your accounts (as mentioned in the comment by @jordanm), like this (examples from the AWS documentation):
[default]
aws_access_key_id=foo
aws_secret_access_key=bar
[dev]
aws_access_key_id=foo2
aws_secret_access_key=bar2
[prod]
aws_access_key_id=foo3
aws_secret_access_key=bar3
And select the profile you want in the code, like this (again, from the AWS documentation):
session = boto3.Session(profile_name='dev')
# Any clients created from this session will use credentials
# from the [dev] section of ~/.aws/credentials.
dev_s3_client = session.client('s3')
Now you just need to loop through all your profiles.
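A minimal sketch of that loop, assuming the profiles shown above; the per-account call (listing S3 buckets here) is just an illustration:
import boto3

profiles = ['default', 'dev', 'prod']  # profiles defined in ~/.aws/credentials

for profile in profiles:
    session = boto3.Session(profile_name=profile)
    s3 = session.client('s3')
    # Replace with whatever per-account information you need
    buckets = s3.list_buckets()['Buckets']
    print(profile, [b['Name'] for b in buckets])
If you don't want to hard-code the list, boto3.session.Session().available_profiles returns the profiles boto3 can see.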
I have my config file set up with multiple profiles and I am trying to assume an IAM role, but all the articles I see about assuming roles start by making an STS client like this:
import boto3
client = boto3.client('sts')
which makes sense, but the problem is that it gives me an error when I try it that way. When I instead pass a profile that exists in my config file, it works. Here is the code:
import boto3
session = boto3.Session(profile_name="test_profile")
sts = session.client("sts")
response = sts.assume_role(
RoleArn="arn:aws:iam::xxx:role/role-name",
RoleSessionName="test-session"
)
new_session = boto3.Session(
    aws_access_key_id=response['Credentials']['AccessKeyId'],
    aws_secret_access_key=response['Credentials']['SecretAccessKey'],
    aws_session_token=response['Credentials']['SessionToken']
)
When other people assume roles in their code without passing a profile, how does that even work? Does boto3 automatically grab the default profile from the config file, or something like that, in their case?
Yes. This line:
client = boto3.client('sts')
tells boto3 to create a client using the default credentials.
The credentials can be provided in the ~/.aws/credentials file. If the code is running on an Amazon EC2 instance, boto3 will automatically use the credentials from the IAM Role attached to the instance.
Credentials can also be passed via Environment Variables.
See: Credentials — Boto3 documentation
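For completeness, a minimal sketch of the same role assumption relying on the default credential chain instead of an explicit profile; the role ARN and session name are placeholders:
import boto3

# Uses the default credential chain: environment variables, the [default]
# profile in ~/.aws/credentials, or the EC2 instance role.
sts = boto3.client('sts')

response = sts.assume_role(
    RoleArn='arn:aws:iam::xxx:role/role-name',
    RoleSessionName='test-session'
)

creds = response['Credentials']
new_session = boto3.Session(
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken']
)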
I am trying to write a script in Python to grab new emails from a specific folder and save the attachments to a shared drive, to be uploaded to a database. Power Automate would work, but its file size limit for saving attachments is a meager 20 MB. I am able to authenticate and obtain a token, but I get the following error when trying to grab the emails:
Unauthorized for url.
The token contains no permissions, or permissions can not be understood.
I have included the code I am using to connect to Microsoft Graph (the credentials and tenant_id are correct in my code; I removed them for obvious reasons):
from O365 import Account, MSOffice365Protocol, MSGraphProtocol

credentials = ('xxxxxx', 'xxxxxx')

protocol = MSGraphProtocol(default_resource='reporting.triometric@xxxx.com')
scopes_graph = protocol.get_scopes_for('message_all_shared')
scopes = ['https://graph.microsoft.com/.default']

account = Account(credentials, auth_flow_type='credentials', tenant_id="**", scopes=scopes)

if account.authenticate():
    print('Authenticated')

mailbox = account.mailbox(resource='reporting.triometric@xxxx.com')
inbox = mailbox.inbox_folder()

for message in inbox.get_messages():
    print(message)
I have already configured the permissions through Azure to include all the necessary Mail delegated permissions.
The rest of my script works perfectly fine for uploading files to the database. Currently, the attachments must be manually saved on a shared drive multiple times per day, then the script is run to upload. Are there any steps I am missing? Any insights would be greatly appreciated!
Here are the permissions I have configured (screenshot not reproduced here):
auth_flow_type='credentials' means you are using the client credentials flow.
In this case you should add Application permissions rather than Delegated permissions.
Don't forget to click on "Grant admin consent for {your tenant}".
UPDATE:
If you set auth_flow_type to 'authorization', it will use the authorization code flow, which requires delegated permissions.
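A minimal sketch of how the auth_flow_type choice maps to the permission type, using only the calls already shown in the question; the client id/secret and tenant id are placeholders:
from O365 import Account

credentials = ('client_id', 'client_secret')

# Client credentials flow: the app authenticates as itself, so the Mail
# permissions must be added as Application permissions and admin consent granted.
account = Account(credentials, auth_flow_type='credentials',
                  tenant_id='your-tenant-id',
                  scopes=['https://graph.microsoft.com/.default'])

# Authorization code flow instead: a user signs in and consents, so the Mail
# permissions must be added as Delegated permissions.
# account = Account(credentials, auth_flow_type='authorization', tenant_id='your-tenant-id')

if account.authenticate():
    print('Authenticated')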
I am struggling to programmatically access a kubernetes cluster running on Google Cloud. I have set up a service account and pointed GOOGLE_APPLICATION_CREDENTIALS to a corresponding credentials file. I managed to get the cluster and credentials as follows:
import google.auth
import google.auth.transport.requests
from google.cloud.container_v1 import ClusterManagerClient
from kubernetes import client

credentials, project = google.auth.default(
    scopes=['https://www.googleapis.com/auth/cloud-platform'])
credentials.refresh(google.auth.transport.requests.Request())
cluster_manager = ClusterManagerClient(credentials=credentials)
cluster = cluster_manager.get_cluster(project, 'us-west1-b', 'clic-cluster')
So far so good. But then I want to start using the kubernetes client:
config = client.Configuration()
config.host = f'https://{cluster.endpoint}:443'
config.verify_ssl = False
config.api_key = {"authorization": "Bearer " + credentials.token}
config.username = credentials._service_account_email
client.Configuration.set_default(config)
kub = client.CoreV1Api()
print(kub.list_pod_for_all_namespaces(watch=False))
And I get an error message like this:
pods is forbidden: User "12341234123451234567" cannot list resource "pods" in API group "" at the cluster scope: Required "container.pods.list" permission.
I found this website describing the container.pods.list, but I don't know where I should add it, or how it relates to the API scopes described here.
As per the error:
pods is forbidden: User "12341234123451234567" cannot list resource "pods" in API group "" at the cluster scope: Required "container.pods.list" permission.
it seems evident that the credentials you are using do not have permission to list pods.
The full list of permissions is described in https://cloud.google.com/kubernetes-engine/docs/how-to/iam. Several predefined roles can come into play here:
If you are able to call get_cluster, that call is covered by several roles: Kubernetes Engine Cluster Admin, Kubernetes Engine Cluster Viewer, Kubernetes Engine Developer, and Kubernetes Engine Viewer.
Whereas to list pods with kub.list_pod_for_all_namespaces(watch=False), you need at least Kubernetes Engine Viewer access.
You should be able to add multiple roles to the same service account.
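For reference, granting the Kubernetes Engine Viewer role (roles/container.viewer) to the service account can be done from the command line; the project ID and service account email below are placeholders:
gcloud projects add-iam-policy-binding my-project-id \
    --member="serviceAccount:my-sa@my-project-id.iam.gserviceaccount.com" \
    --role="roles/container.viewer"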
I am using Python and boto3 to list the resources that my organization has. I can list the resources from my master account without a problem, but I also need to list the resources from the child accounts. I can get the child account IDs, but that's pretty much it.
Any help?
You will need access to a set of credentials that belong to the child account.
From Accessing and Administering the Member Accounts in Your Organization - AWS Organizations:
When you create a member account using the AWS Organizations console, AWS Organizations automatically creates an IAM role in the account. This role has full administrative permissions in the member account. The role is also configured to grant that access to the organization's master account.
To use this role to access the member account, you must sign in as a user from the master account that has permissions to assume the role.
So, you can assume the IAM Role in the child account, which then provides a set of temporary credentials that can be used with boto3 to make API calls to the child account.
import boto3
role_info = {
    'RoleArn': 'arn:aws:iam::<AWS_ACCOUNT_NUMBER>:role/<AWS_ROLE_NAME>',
    'RoleSessionName': '<SOME_SESSION_NAME>'
}

client = boto3.client('sts')
credentials = client.assume_role(**role_info)

session = boto3.session.Session(
    aws_access_key_id=credentials['Credentials']['AccessKeyId'],
    aws_secret_access_key=credentials['Credentials']['SecretAccessKey'],
    aws_session_token=credentials['Credentials']['SessionToken']
)
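Any client created from that session then operates in the child account; for example, listing its S3 buckets (shown only as an illustration):
s3 = session.client('s3')
print(s3.list_buckets()['Buckets'])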
An easier way is to put the role in your .aws/config file as a new profile. Then, you can specify a profile when making function calls:
# In ~/.aws/credentials:
[master]
aws_access_key_id=foo
aws_secret_access_key=bar
# In ~/.aws/config
[profile child1]
role_arn=arn:aws:iam:...
source_profile=master
Use it like this:
session = boto3.session.Session(profile_name='child1')
s3 = session.client('s3')
See: How to choose an AWS profile when using boto3 to connect to CloudFront
I am trying to upload a folder from my local machine to a Google Cloud Storage bucket. I get an error with the credentials. Where should I provide the credentials, and what information do they need to contain?
from_dest = '/Users/xyzDocuments/tmp'
gsutil_link = 'gs://bucket-1991'
from google.cloud import storage

try:
    storage_client = storage.Client()
    bucket = storage_client.get_bucket(bucket_name)
    blob = bucket.blob(destination_blob_name)
    blob.upload_from_filename(source_file_name)
    print('File {} uploaded to {}.'.format(source_file_name, destination_blob_name))
except Exception as e:
    print(e)
The error is:
could not automatically determine credentials. Please set GOOGLE_APPLICATION_CREDENTIALS or explicitly create credentials and re-run the application. For more information, please see https://developers.google.com/accounts/docs/application-default-credentials.
You need to acquire the application default credentials for your project and set them as an environment variable:
Go to the Create service account key page in the GCP Console.
From the Service account drop-down list, select New service account.
Enter a name into the Service account name field.
From the Role drop-down list, select Project > Owner.
Click Create. A JSON file that contains your key downloads to your computer.
Then, set an environment variable which will provide the application credentials to your application when it runs locally:
$ export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/[FILE_NAME].json"
This error message is usually thrown when the application is not being authenticated correctly, for reasons such as missing files, invalid credential paths, or incorrect environment variable assignments. Keep in mind that when you set an environment variable in a session, it only lasts for that session and is reset when the session ends.
Based on this, I recommend that you verify the credential file and file path are correctly assigned, and also follow the "Obtaining and providing service account credentials manually" guide to explicitly specify your service account file directly in your code. That way the setting is permanent and you can verify that you are passing the service credentials correctly.
Example of passing the path to the service account key in code:
def explicit():
    from google.cloud import storage

    # Explicitly use service account credentials by specifying the private key file.
    storage_client = storage.Client.from_service_account_json('service_account.json')

    # Make an authenticated API request
    buckets = list(storage_client.list_buckets())
    print(buckets)
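Applied to the original upload snippet, a minimal sketch would look like this; the key file path, object name, and local path are illustrative:
from google.cloud import storage

# Explicit service account credentials instead of the default lookup
storage_client = storage.Client.from_service_account_json('service_account.json')

bucket = storage_client.get_bucket('bucket-1991')
blob = bucket.blob('tmp/report.csv')  # destination object name in the bucket
blob.upload_from_filename('/path/to/local/file.csv')  # local file to upload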