Although I have given the function's IAM role full AdministratorAccess, along with AmazonDynamoDBFullAccess as well, every time I test the function I am greeted with the same error message:
no identity-based policy allows the dynamodb:PutItem action.
How do I fix this? I literally cannot give the role more access, so I am very confused. I have granted every permission I can.
Two things I can think of:
Check that you are assigning the policies to the Lambda execution role (see the sketch below).
https://docs.aws.amazon.com/lambda/latest/dg/lambda-permissions.html
If you are part of an organization, check that you do not have any SCPs (service control policies) in place preventing PutItem, as they would take precedence.
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html
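If you want to check the first point from code rather than the console, here is a minimal sketch, assuming a placeholder function name, that prints which role the function actually executes with and which managed policies are attached to it:
import boto3

lambda_client = boto3.client('lambda')
iam_client = boto3.client('iam')

# The Role returned here is the execution role the function really runs with.
# 'my-function' is a placeholder -- substitute your own function name.
config = lambda_client.get_function_configuration(FunctionName='my-function')
print('Execution role:', config['Role'])

# AdministratorAccess / AmazonDynamoDBFullAccess should appear here if they
# are attached to the right role.
role_name = config['Role'].split('/')[-1]
attached = iam_client.list_attached_role_policies(RoleName=role_name)
for policy in attached['AttachedPolicies']:
    print(policy['PolicyName'], policy['PolicyArn'])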
Is there a way in boto3 to get the access levels of a service in a policy (Full access, List, Tagging, Read, Write)? The list of allowed actions is more detail than I need; I just want the access levels.
For example, I have the IAMUserChangePassword policy.
The allowed service in that policy is "IAM" and the access levels are "Read, Write". Now I want to write some Python code to return a list of all access levels. I do not need the actions (iam:GetAccountPasswordPolicy, iam:ChangePassword), just the access levels.
No, this is not possible.
While the IAM console does provide a 'user-friendly' version of policies by showing checkboxes with Read, Write, List, etc, this level of information is not available through an API. The console must have some additional logic that parses the policies to extract this information.
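What you can retrieve through the API is the raw policy document and the actions it allows; mapping those actions onto the console's Read/Write/List buckets would require your own lookup table. A minimal sketch, assuming the AWS-managed IAMUserChangePassword policy from the question:
import boto3

iam = boto3.client('iam')

# ARN of the AWS-managed policy mentioned in the question.
policy_arn = 'arn:aws:iam::aws:policy/IAMUserChangePassword'

# Fetch the default (active) version of the policy and print its statements.
policy = iam.get_policy(PolicyArn=policy_arn)['Policy']
version = iam.get_policy_version(PolicyArn=policy_arn,
                                 VersionId=policy['DefaultVersionId'])
# The document contains the allowed actions, but no access-level classification.
print(version['PolicyVersion']['Document'])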
I want to change a Git credential for AWS CodeCommit to Active/Inactive using Boto3.
I tried to use update_service_specific_credential but I got this error:
An error occurred (InvalidClientTokenId) when calling the CreateServiceSpecificCredential operation: The security token included in the request is invalid: ClientError
My code:
import boto3

iamClient = boto3.client('iam')
response = iamClient.update_service_specific_credential(
    UserName="****",
    ServiceSpecificCredentialId="*****",
    Status="Active")
Has anyone tried to use it?
Any advice?
Thanks!
AWS errors are often purposefully opaque/non-specific so could you give a bit more detail? Specifically, are the user performing the update and the user whose credentials are being updated two different users? There may be a race condition arising if the user being updated IS the user performing the update.
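One extra data point that would help (my suggestion, not part of the original question): print which identity the client is actually signing requests with, since InvalidClientTokenId generally points at the caller's own credentials being invalid or stale rather than at the target user:
import boto3

# Confirm which account/identity the default credentials actually belong to
# before calling update_service_specific_credential.
sts = boto3.client('sts')
print(sts.get_caller_identity())  # shows Account, Arn and UserId of the caller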
I want to programmatically get all the actions a user is allowed to perform across AWS services.
I've tried to fiddle with simulate_principal_policy but it seems this method expects a list of all actions, and I don't want to maintain a hard-coded list.
I also tried to call it with iam:*, for example, and got a generic 'implicitDeny' response, so I know the user is not permitted all the actions, but I need a finer granularity of actions.
Any ideas as to how do I get the action list dynamically?
Thanks!
To start with, there is no programmatic way to retrieve all possible actions (regardless of whether the user is permitted to use them).
You would need to construct a list of possible actions before checking the security. As an example, the boto3 SDK for Python contains an internal list of commands that it uses to validate commands before sending them to AWS.
Once you have a particular action, you could use the Policy Simulator API to validate whether a given user would be allowed to make a particular API call. This is much easier than attempting to parse the various Allow and Deny permissions associated with a given user.
However, a call might be denied based upon the specific parameters of the call. For example, a user might have permissions to terminate any Amazon EC2 instance that has a particular tag, but cannot terminate all instances. To correctly test this, an InstanceId would need to be provided to the simulation.
Also, permissions might be restricted by IP Address and even Time of Day. Thus, while a user would have permission to call an Action, where and when they do it will have an impact on whether the Action is permitted.
Bottom line: It ain't easy! AWS will validate permissions at the time of the call. Use the Policy Simulator to obtain similar validation results.
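For reference, a minimal sketch of calling the Policy Simulator from boto3; the user ARN is a placeholder, and you still need to supply the action names yourself:
import boto3

iam = boto3.client('iam')

# Evaluate a few explicitly named actions for a given principal (placeholder ARN).
response = iam.simulate_principal_policy(
    PolicySourceArn='arn:aws:iam::123456789012:user/example-user',
    ActionNames=['s3:PutObject', 'ec2:TerminateInstances'])

for result in response['EvaluationResults']:
    # EvalDecision is 'allowed', 'implicitDeny' or 'explicitDeny'.
    print(result['EvalActionName'], result['EvalDecision'])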
I am surprised no one has answered this question correctly. Here is code that uses boto3 that addresses the OP's question directly:
import boto3

session = boto3.Session(region_name='us-east-1')
for service in session.get_available_services():
    service_client = session.client(service)
    print(service)
    print(service_client.meta.service_model.operation_names)
IAM, however, is a special case as it won't be listed in the get_available_services() call above:
IAM = session.client('iam')
print('iam')
print(IAM.meta.service_model.operation_names)
I am running the k-means example in SageMaker:
from sagemaker import KMeans

data_location = 's3://{}/kmeans_highlevel_example/data'.format(bucket)
output_location = 's3://{}/kmeans_example/output'.format(bucket)

kmeans = KMeans(role=role,
                train_instance_count=2,
                train_instance_type='ml.c4.8xlarge',
                output_path=output_location,
                k=10,
                data_location=data_location)
When I run the following cell, I get an access denied error.
%%time
kmeans.fit(kmeans.record_set(train_set[0]))
The error returns:
ClientError: An error occurred (AccessDenied) when calling the
PutObject operation: Access Denied
I also read other questions, but their answers do not solve my problem.
Would you please look at my case?
To be able to train a job in SageMaker, you need to pass in an AWS IAM role that allows SageMaker to access your S3 bucket.
The error means that SageMaker does not have permissions to write files in the bucket that you specified.
You can find the permissions that you need to add to your role here: https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html#sagemaker-roles-createtrainingjob-perms
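As a rough illustration of granting that access, here is a sketch that adds an inline S3 policy to the execution role; the role name, policy name, and bucket name are placeholders, and the policy in the linked documentation is the authoritative reference:
import json
import boto3

iam = boto3.client('iam')
bucket = 'my-sagemaker-bucket'  # placeholder

# Inline policy letting the SageMaker execution role read and write the training bucket.
policy_document = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Action': ['s3:GetObject', 's3:PutObject', 's3:ListBucket'],
        'Resource': ['arn:aws:s3:::{}'.format(bucket),
                     'arn:aws:s3:::{}/*'.format(bucket)]
    }]
}

iam.put_role_policy(RoleName='MySageMakerExecutionRole',  # placeholder
                    PolicyName='SageMakerS3Access',
                    PolicyDocument=json.dumps(policy_document))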
Another thing to consider: if you are using an encrypted bucket that requires KMS decryption, make sure to also include the KMS-related permissions.
I've noticed that sometimes the error shown is PutObject operation: Access Denied while the failure is actually KMS related.
I faced the same problem. My SageMaker notebook instance wasn't able to read or write files to my S3 bucket. The first step of troubleshooting is locating the role for your **SageMaker instance**. You can do that by checking this section.
Then go to that specific role in IAM and attach another policy to it.
I attached S3 full access, but you can create a custom policy.
I was getting confused because I was logged in as the admin user. However, when you work from a SageMaker instance, your own user's policies/roles are not used to perform actions; the instance's role is.
In my case, I had simply forgotten to rename the S3 bucket from the default given to something unique.
I would like to know a more efficient way of renewing the STS credentials for a cross-account role when running on Lambda. By default those credentials last for 1 hour, but so far I'm doing it this way:
import boto3

def aws_session(role_arn, session_name):
    sts_client = boto3.client('sts')
    response = sts_client.assume_role(RoleArn=role_arn, RoleSessionName=session_name)
    session = boto3.Session(
        aws_access_key_id=response['Credentials']['AccessKeyId'],
        aws_secret_access_key=response['Credentials']['SecretAccessKey'],
        aws_session_token=response['Credentials']['SessionToken'],
        region_name='us-east-1')
    return session

def lambda_handler(event, context):
    session = aws_session(role_arn=ARN, session_name='CrossAccountLambdaRole')
    s3_sts = session.resource('s3')
But it is terribly inefficient: instead of ~300 ms, renewing the credentials takes more than ~1500 ms each time, and as we all know, we are charged on execution duration. Could anyone help me refresh the credentials only when the token expires? Because between executions we are not sure to end up using the same "container", so how do I make a global variable work?
Thanks a lot!
Remove AssumeRole
I think your problem stems from the fact that your code is picking the role it needs on each run, and your assume-role code will indeed generate a new token on each call. I'm not familiar with the Python Boto library, but in Node I only call AssumeRole when I'm testing locally and want to pull down new credentials; I save those credentials and never call AssumeRole again until I want new creds. Every time I call AssumeRole, I get new credentials as expected. You don't need STS directly to run your Lambda functions.
An Alternate Approach:
For the production application, my Lambda code does not pick its role. The automation scripts that build the Lambda function assign it a role, and the Lambda function will use that role forever, with AWS managing the refresh of credentials on the back end as they expire. You can do this by building your Lambda function in CloudFormation and specifying what role you want it to use.
Lambda via CloudFormation
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-function.html
If you then want to view what credentials your function is operating with you can print out environment variables. Lambda will pass in temporary credentials to your function and the credentials will be associated with the role you've defined.
Simpler Approach
If you don't want to deal with CloudFormation, deploy your function manually in the AWS console and specify there the role it should run with. But the bottom line is you don't need to use STS inside your Lambda code. Assign the role externally.
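To make the point concrete, here is a minimal sketch of a handler that relies entirely on the execution role (assuming that role already has the S3 permissions it needs); Lambda injects and refreshes the temporary credentials, so no STS call appears in the code:
import boto3

# boto3 picks up the temporary credentials Lambda provides for the function's
# execution role; no assume_role call is needed.
s3 = boto3.resource('s3')

def lambda_handler(event, context):
    return [bucket.name for bucket in s3.buckets.all()]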
Since you're going across accounts, you obviously can't follow the advice many people give of attaching the permissions directly to the Lambda.
Your best option is Parameter Store, which is covered in detail here:
https://aws.amazon.com/blogs/compute/sharing-secrets-with-aws-lambda-using-aws-systems-manager-parameter-store/
Simply have lambda request the credentials from there instead.
That said, it's probably not going to save much time compared to STS requests... But I've not timed either process.
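A rough sketch of that idea follows; the parameter name and the JSON layout of the stored credentials are assumptions of mine, not something defined by the blog post:
import json
import boto3

ssm = boto3.client('ssm')

def session_from_parameter_store(name='/cross-account/credentials'):
    # SecureString parameter assumed to hold the access key, secret key and
    # session token as a JSON blob.
    value = ssm.get_parameter(Name=name, WithDecryption=True)['Parameter']['Value']
    creds = json.loads(value)
    return boto3.Session(
        aws_access_key_id=creds['AccessKeyId'],
        aws_secret_access_key=creds['SecretAccessKey'],
        aws_session_token=creds['SessionToken'],
        region_name='us-east-1')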
A perhaps less-good but fairly simple way is to store the credentials in /tmp and build a process around ensuring the credentials remain valid -- perhaps assume the role with a 65-minute duration and save the credentials to a timestamped file with the minutes/seconds dropped. If the file exists, read it in via file I/O.
Keep in mind you're saving credentials in a way that can be compromised if your code allows the file to be read somehow... Though as a Lambda, under the shared-responsibility security model, it's reasonably secure compared to using this strategy on a persistent server.
Always use least-privilege roles. Only allow your trusted account to assume this role... I think you can even lock the trust policy down to a specific incoming Lambda role that is allowed to assume it. That way, credentials leaked by somehow reading/outputting the file would require a malicious user to compromise some other aspect of your account (if locked down by account number only), or to achieve remote code execution inside your Lambda itself (if locked to the Lambda role)... though at that point your credentials are already available to the malicious user anyway.
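Here is a minimal sketch of that /tmp caching idea; the file name, duration, and expiry margin are assumptions, the role's maximum session duration must allow 65 minutes, and the security trade-off above still applies:
import json
import os
import time
import boto3

CACHE_FILE = '/tmp/assumed_role_credentials.json'  # survives warm invocations only

def cached_session(role_arn, session_name):
    # Reuse cached credentials while they are still comfortably within their lifetime.
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE) as f:
            cached = json.load(f)
        if cached['Expiration'] - time.time() > 300:  # 5-minute safety margin
            return _session_from(cached)

    sts = boto3.client('sts')
    creds = sts.assume_role(
        RoleArn=role_arn,
        RoleSessionName=session_name,
        DurationSeconds=3900)['Credentials']  # ~65 minutes; the role must permit it

    cached = {
        'AccessKeyId': creds['AccessKeyId'],
        'SecretAccessKey': creds['SecretAccessKey'],
        'SessionToken': creds['SessionToken'],
        'Expiration': creds['Expiration'].timestamp()}
    with open(CACHE_FILE, 'w') as f:
        json.dump(cached, f)
    return _session_from(cached)

def _session_from(creds):
    return boto3.Session(
        aws_access_key_id=creds['AccessKeyId'],
        aws_secret_access_key=creds['SecretAccessKey'],
        aws_session_token=creds['SessionToken'],
        region_name='us-east-1')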