I want to change a Git credential for AWS CodeCommit to Active/Inactive using Boto3.
I tried to use update_service_specific_credential, but I got this error:
An error occurred (InvalidClientTokenId) when calling the CreateServiceSpecificCredential operation: The security token included in the request is invalid: ClientError
My code:
import boto3

iamClient = boto3.client('iam')
response = iamClient.update_service_specific_credential(UserName="****",
    ServiceSpecificCredentialId="*****", Status="Active")
Has anyone tried to use it?
Any advice?
Thanks!
AWS errors are often purposefully opaque/non-specific, so could you give a bit more detail? Specifically, are the user performing the update and the user whose credentials are being updated two different users? A race condition may arise if the user being updated IS the user performing the update.
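One thing worth noting: InvalidClientTokenId generally means the credentials or session token themselves are invalid, not that a permission is missing. If it helps, here is a small sketch for triaging that, with a quick identity check via STS get_caller_identity. The helper name and error-code grouping are my own, not an AWS API:

```python
def classify_aws_error(code):
    """Rough triage: separate bad-credential errors from permission/request errors.
    (The grouping below is my own heuristic, not an official AWS list.)"""
    credential_errors = {"InvalidClientTokenId", "UnrecognizedClientException",
                         "ExpiredToken", "SignatureDoesNotMatch"}
    return "credentials" if code in credential_errors else "permissions-or-request"

def whoami():
    """Print the identity your default Boto3 session actually resolves to."""
    import boto3  # imported here so classify_aws_error works without boto3 installed
    sts = boto3.client("sts")
    print(sts.get_caller_identity()["Arn"])
```

If whoami() fails with the same InvalidClientTokenId, the problem is your local credentials (expired keys, wrong profile), not update_service_specific_credential itself.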
Although I have given the function's IAM role full AdministratorAccess, along with AmazonDynamoDBFullAccess as well, every time I test the function I am greeted with the same error message:
no identity-based policy allows the dynamodb:PutItem action.
How do I fix this? I literally cannot give the IAM role more access, so I am very confused. I have granted every permission I can.
Two things I can think of:
Check that you are assigning the policies to the Lambda execution role.
https://docs.aws.amazon.com/lambda/latest/dg/lambda-permissions.html
If you are part of an organization, check that you do not have any SCPs (service control policies) in place preventing PutItem, as they take precedence.
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html
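If SCPs are ruled out, the execution role just needs an identity-based policy that allows the action. Here is a minimal sketch of such a policy document built in Python; the account ID and table name are placeholders you would swap for your own:

```python
import json

# Minimal identity-based policy allowing PutItem on a single table.
# The region, account id, and table name below are placeholders.
put_item_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "dynamodb:PutItem",
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/YourTable",
        }
    ],
}

print(json.dumps(put_item_policy, indent=2))
```

Attach this (or the managed AmazonDynamoDBFullAccess) to the role shown under the Lambda function's Configuration → Permissions tab, not to your console user.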
I am new to AWS.
I have a user account and two roles, one for prod one for test.
Usually I log into my account and switch to prod role to run some simple select queries.
Now I want to use Athena locally in Python with PyAthena.
I have tried the following example from the PyAthena documentation:
from pyathena import connect
import pandas as pd

conn = connect(aws_access_key_id='YOUR_ACCESS_KEY_ID',
               aws_secret_access_key='YOUR_SECRET_ACCESS_KEY',
               s3_staging_dir='s3://YOUR_S3_BUCKET/path/to/',
               region_name='us-west-2')
df = pd.read_sql("SELECT * FROM many_rows", conn)
print(df.head())
But I always get the error:
An error occurred (AccessDeniedException) when calling the StartQueryExecution operation: User: arn:aws:iam::xxxxxx:user/xxxx#xxxxx is not authorized to perform: athena:StartQueryExecution on resource: arn:aws:athena:ap-southeast-2:xxxxx:workgroup/primary
This is the exact error I would get if I ran the same query using my user account without switching to the role.
I have also tried adding a profile_name parameter to connect, but it still does not work even though the profile is correctly recognised.
Could someone show me how to do the 'switch role' step in local Python code?
It seems like the issue is due to a missing role, not the profile_name parameter. If you look at the Connection class in PyAthena, there is a role_arn variable that you can specify when initializing the connection. Here's the line I'm talking about.
You might want to try it this way:
conn = connect(aws_access_key_id='YOUR_ACCESS_KEY_ID',
               aws_secret_access_key='YOUR_SECRET_ACCESS_KEY',
               s3_staging_dir='s3://YOUR_S3_BUCKET/path/to/',
               region_name='us-west-2',
               role_arn='YOUR_ROLE_ARN')
I haven't tested it myself, though, since I do not have an Athena setup.
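Another way to do the 'switch role' step yourself is to call STS AssumeRole and hand the temporary credentials to PyAthena. This is only a sketch (the function names are mine, and I have not run it against a live Athena), but the mapping from the STS response to connect() keyword arguments looks like this:

```python
def creds_to_connect_kwargs(credentials, s3_staging_dir, region_name):
    """Map the Credentials dict returned by STS AssumeRole to PyAthena connect() kwargs."""
    return {
        "aws_access_key_id": credentials["AccessKeyId"],
        "aws_secret_access_key": credentials["SecretAccessKey"],
        "aws_session_token": credentials["SessionToken"],
        "s3_staging_dir": s3_staging_dir,
        "region_name": region_name,
    }

def connect_with_assumed_role(role_arn, s3_staging_dir, region_name):
    # Imports are local so the pure helper above works without boto3/pyathena installed.
    import boto3
    from pyathena import connect
    sts = boto3.client("sts")
    creds = sts.assume_role(RoleArn=role_arn,
                            RoleSessionName="pyathena-local")["Credentials"]
    return connect(**creds_to_connect_kwargs(creds, s3_staging_dir, region_name))
```

Note the temporary credentials include a session token, which must be passed along or the calls will be rejected.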
I'm trying to access an Azure Databricks Spark cluster with a Python script that takes a token (generated via the Databricks user settings) as input and calls a GET method to fetch the details of the cluster, along with the cluster ID.
Below is the code snippet. As shown, I have created a cluster in the southcentralus region.
import requests

headers = {"Authorization": "Bearer dapiad************************"}
data = requests.get("https://southcentralus.azuredatabricks.net/api/2.0/clusters/get?cluster_id=**************",
                    headers=headers).text
print(data)
The expected result should give the full details of the cluster, e.g.:
{"cluster_id":"0128-******","spark_context_id":3850138716505089853,"cluster_name":"abcdxyz","spark_version":"5.1.x-scala2.11","spark_conf":{"spark.databricks.delta.preview.enabled":"true"},"node_type_id" and so on .....}
The above code works when I execute it on Google Colaboratory, whereas it does not work in my local IDE, i.e. IDLE. It gives an HTTP 403 error stating the following:
<p>Problem accessing /api/2.0/clusters/get. Reason:
<pre> Invalid access token.</pre></p>
Can anyone help me resolve this issue? I'm stuck on this part and not able to access the cluster through the APIs.
It could be due to an encoding issue when you pass the secret. Please look into this issue and how to resolve it. Even though the resolution they give is for AWS, it could be similar for Azure. Your secret might contain "/", which you have to replace:
There is a known problem in the last update related to the '+'
character in secret keys. In particular, we no longer support escaping
'+' into '%2B', which some URL-encoding libraries do.
The current best-practice way of encoding your AWS secret key is
simply
secretKey.replace("/","%2F")
A sample Python script is given below:
New_Secret_key = "MySecret/".replace("/", "%2F")
https://forums.databricks.com/questions/6590/s3serviceexception-raised-when-accessing-via-mount.html
https://forums.databricks.com/questions/6621/responsecode403-responsemessageforbidden.html
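Following the forum guidance quoted above, the replacement should target only "/" (the same post says escaping '+' into '%2B' is no longer supported, so a generic URL-encoder would be wrong here). A small sketch, with the helper name being my own:

```python
def encode_aws_secret_for_url(secret):
    """Escape only "/" in the secret, per the Databricks forum guidance above.
    Deliberately does NOT percent-encode "+", since escaping it to %2B is
    reported as unsupported there."""
    return secret.replace("/", "%2F")

print(encode_aws_secret_for_url("MySecret/"))   # MySecret%2F
print(encode_aws_secret_for_url("a+b/c"))       # a+b%2Fc
```

If your token contains neither "/" nor "+", this is likely not the cause, and the 403 points back to the token value itself (e.g. it was truncated or regenerated).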
I am running the k-means example in SageMaker:
from sagemaker import KMeans

data_location = 's3://{}/kmeans_highlevel_example/data'.format(bucket)
output_location = 's3://{}/kmeans_example/output'.format(bucket)

kmeans = KMeans(role=role,
                train_instance_count=2,
                train_instance_type='ml.c4.8xlarge',
                output_path=output_location,
                k=10,
                data_location=data_location)
When I run the following line, an access denied error appears:
%%time
kmeans.fit(kmeans.record_set(train_set[0]))
The error returns:
ClientError: An error occurred (AccessDenied) when calling the
PutObject operation: Access Denied
I also read other questions, but their answers do not solve my problem.
Would you please look at my case?
To be able to train a job in SageMaker, you need to pass in an AWS IAM role allowing SageMaker to access your S3 bucket.
The error means that SageMaker does not have permission to write files to the bucket that you specified.
You can find the permissions that you need to add to your role here: https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html#sagemaker-roles-createtrainingjob-perms
Another thing to consider: if you are using an encrypted bucket that requires KMS decryption, make sure to also include the KMS-related permissions.
I've noticed that sometimes the error shown is PutObject operation: Access Denied while the failure is actually KMS related.
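For the KMS case, a sketch of the two statements the execution role would need, S3 write access plus decrypt/data-key permissions on the bucket's key. The bucket name and key ARN are placeholders, and the exact action list may need tailoring to your setup:

```python
import json

# Sketch of role-policy statements for writing SageMaker output to an
# SSE-KMS encrypted bucket. Bucket name and key ARN are placeholders.
statements = [
    {
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::your-bucket",
            "arn:aws:s3:::your-bucket/*",
        ],
    },
    {
        "Effect": "Allow",
        "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
        "Resource": "arn:aws:kms:us-east-1:123456789012:key/your-key-id",
    },
]

policy = {"Version": "2012-10-17", "Statement": statements}
print(json.dumps(policy, indent=2))
```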
I faced the same problem. My SageMaker notebook instance wasn't able to read or write files to my S3 bucket. The first step of troubleshooting is locating the role for your SageMaker instance. You can do that by checking this section.
Then go to that specific role in IAM and attach another policy to it.
I attached S3 full access, but you can create a custom policy.
I was getting confused because I was logged in as the admin user. However, when you use a SageMaker instance, your user's policies/roles are not used to perform actions.
In my case, I had just forgotten to rename the S3 bucket from the default name given to something unique.
import boto3

def getthings():
    client = boto3.client('iot', region_name='name')
    response = client.list_things(nextToken='string', maxResults=123,
                                  attributeName='string', attributeValue='string',
                                  thingTypeName='string')
I am a beginner in Python. I have the code above, which I got from the AWS documentation, to list the things in AWS IoT. I get the following error:
InvalidRequestException: An error occurred (InvalidRequestException)
when calling the ListThings operation.
What is the problem?
Though there is not much detail in this exception, the following steps could help resolve it faster:
Try the call from the Python console first.
Make sure you have the AWS CLI installed.
Run aws configure to set up your access key and secret key.
By the way, the following code worked to get the list of things:
client = boto3.client('iot')
response = client.list_things(maxResults=123, thingTypeName='appropriate thing type')
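The likely cause of the original InvalidRequestException is that the placeholder values from the documentation (nextToken='string', attributeName='string', and so on) were passed literally; those parameters are optional and should simply be omitted. If you have many things, a paginator handles nextToken for you. A sketch, with the helper names being my own:

```python
def thing_names(list_things_page):
    """Pull the thing names out of one list_things response page."""
    return [t["thingName"] for t in list_things_page.get("things", [])]

def get_all_things(region_name):
    import boto3  # local import so thing_names() is usable without boto3 installed
    client = boto3.client("iot", region_name=region_name)
    names = []
    # Let the paginator manage nextToken instead of passing a placeholder value.
    for page in client.get_paginator("list_things").paginate():
        names.extend(thing_names(page))
    return names
```

Also note that region_name must be a real region such as 'us-east-1', not the literal 'name' from the snippet in the question.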