I have created a small app that should download a file from AWS S3.
I can download the data correctly this way:
import boto3

s3_client = boto3.resource('s3')
req = s3_client.meta.client.download_file(bucket, ob_key, dest)
but if I call the same code inside a Flask route it no longer works. I get this error:
ClientError: An error occurred (400) when calling the HeadObject operation: Bad Request
I'm not able to figure out why it does not work inside the route. Any idea?
That is related to your AWS region. Pass the region name as an explicit parameter.
Try it on your local machine, using
aws s3 cp s3://bucket-name/file.png file.png --region us-east-1
If you are able to download the file using this command, then it should work fine from your API also.
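In boto3, that would look something like this (us-east-1 is just an example; use the bucket's actual region, and bucket, ob_key and dest are the same variables as in the question):

import boto3

# Pin the resource to the bucket's region (us-east-1 is only an example).
s3_client = boto3.resource('s3', region_name='us-east-1')
s3_client.meta.client.download_file(bucket, ob_key, dest)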
The problem was that with Flask I needed to declare s3_client as a global variable instead of creating it inside the function.
Now it works perfectly!
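A minimal sketch of that layout, assuming a bare Flask app with placeholder bucket, key and destination values:

import boto3
from flask import Flask

app = Flask(__name__)

# Create the S3 handle once at module level, not inside the request handler.
s3_client = boto3.resource('s3')

@app.route('/download')
def download():
    # 'my-bucket', 'my-key' and '/tmp/my-file' are placeholder values.
    s3_client.meta.client.download_file('my-bucket', 'my-key', '/tmp/my-file')
    return 'ok'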
I have a file in CodeCommit with the URI:
codecommit://FruitLoops/apples/granny_smith.json
And when I tried:
import boto3

session = boto3.Session(aws_access_key_id=some_key, aws_secret_access_key=some_secret)
client = session.client('codecommit')
repo = "FruitLoops"
client.get_file(repositoryName=repo, filePath="apples/granny_smith.json")
It's throwing an error:
RepositoryDoesNotExistException: An error occurred (RepositoryDoesNotExistException) when calling the GetFile operation: FruitLoops does not exist
I've tried searching around on Google and found the example https://github.com/boto/boto3/issues/2329 and https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/codecommit.html
But when I use the CLI, the repository is there:
aws codecommit get-file --repository-name FruitLoops --file-path "apples/granny_smith.json" --query fileContent --output text
What is the right syntax in boto3 to access the file through codecommit?
You are not passing the region name. The boto3 client/session documentation explains how you can do that.
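For example, something like this (the region below is only an assumption; use the region the FruitLoops repository actually lives in):

import boto3

# us-east-1 is only an example region.
client = boto3.client('codecommit', region_name='us-east-1')
response = client.get_file(repositoryName='FruitLoops',
                           filePath='apples/granny_smith.json')
print(response['fileContent'])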
I wrote a Python script to automatically download a file from a DB and upload it to an S3 bucket we own. The script works from my PC, and I can successfully ping Amazon S3 from within the Kubernetes cluster we are working on, but I get a 503 when the script tries to upload/download a file from S3. I'm using the following installation: 'python3.6 -m pip install boto3'
and getting the following error: "botocore.exceptions.ClientError: An error occurred (503) when calling the GetObject operation (reached max retries: 15): Service Unavailable"
I tried adding/removing SSL, changing the timeout and max retries, and nothing seems to help. I also tried different boto3 objects (client, session, etc.).
The code that crashes is the following (the failing line is marked with **):
import os

import boto3
from botocore.config import Config

def write_to_s3():
    s3 = get_s3()
    object1 = s3.Object(BUCKET_NAME, FILENAME)
    print(object1)
    test = object1.get()   # ** this is the line that crashes **
    latest_num = int(str(object1.get()['Body'].read())[2:-1])
    print(str(latest_num))
    ...

def get_s3():
    my_config = Config(
        region_name=REGION,
        connect_timeout=25,
        retries={
            'max_attempts': 15,
            'mode': 'standard'
        }
    )
    return boto3.resource('s3', use_ssl=False, config=my_config,
                          aws_access_key_id=os.environ.get("ACCESS_KEY_ID"),
                          aws_secret_access_key=os.environ.get("SECRET_ACCESS_KEY"))
I really do not understand why this happens and found no answers or similar errors on the web. Please help!
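One way to narrow this down (a minimal sketch; the region, bucket and key names below are placeholders) is to issue the same read through a bare client with SSL enabled and the regional endpoint pinned, taking the resource layer and the custom retry settings out of the picture:

import boto3
from botocore.config import Config

# Placeholders for region, bucket and key; SSL left on and the regional
# endpoint pinned, so any proxy/DNS issue inside the cluster shows up.
client = boto3.client(
    's3',
    region_name='eu-west-1',
    endpoint_url='https://s3.eu-west-1.amazonaws.com',
    config=Config(retries={'max_attempts': 2, 'mode': 'standard'}),
)
print(client.head_object(Bucket='my-bucket', Key='my-file.txt'))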
I'm just starting out with boto3 and Lambda and was trying to run the below function via PyCharm.
import boto3
client = boto3.client('rds')
response = client.stop_db_instance(
    DBInstanceIdentifier='dummy-mysql-rds'
)
But I receive the below error:
botocore.errorfactory.DBInstanceNotFoundFault: An error occurred (DBInstanceNotFound) when calling the StopDBInstance operation: DBInstance dummy-mysql-rds not found.
Do you know what may be causing this?
For the record, I have the AWS Toolkit installed for PyCharm and can run simple functions to list and describe EC2 instances, and my AWS profile has admin access.
By explicitly defining the profile name, the below function now works via PyCharm. Thank you @OleksiiDonoha for your help in getting this resolved.
import boto3

# Explicitly select the 'dev' profile before creating the client.
boto3.setup_default_session(profile_name='dev')
client = boto3.client('rds')
response = client.stop_db_instance(
    DBInstanceIdentifier='dev-mysql-rds'
)
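An equivalent that avoids mutating the default session (a sketch with the same assumed profile and instance names):

import boto3

# Create an explicit session bound to the 'dev' profile instead of
# changing boto3's module-level default session.
session = boto3.Session(profile_name='dev')
client = session.client('rds')
response = client.stop_db_instance(
    DBInstanceIdentifier='dev-mysql-rds'
)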
The code below fails at the line s3 = boto3.client('s3'), returning the error botocore.exceptions.InvalidConfigError: The source profile "default" must have credentials.
import os

import boto3

def connect_s3_boto3():
    try:
        os.environ["AWS_PROFILE"] = "a"
        s3 = boto3.client('s3')
        return s3
    except:
        raise
I have set up the key and secret using aws configure
My ~/.aws/credentials file looks like:
[default]
aws_access_key_id = XXXXXXXXXXXXXXXXX
aws_secret_access_key = YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
My ~/.aws/config file looks like:
[default]
region = eu-west-1
output = json
[profile b]
region=eu-west-1
role_arn=arn:aws:iam::XX
source_profile=default
[profile a]
region=eu-west-1
role_arn=arn:aws:iam::YY
source_profile=default
[profile d]
region=eu-west-1
role_arn=arn:aws:iam::EE
source_profile=default
If I run aws-vault exec --no-session --debug a
it returns:
aws-vault: error: exec: Failed to get credentials for a9e: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: 7087ea72-32c5-4b0a-a20e-fd2da9c3c747
I noticed you tagged this question with "docker". Is it possible that you're running your code from a Docker container that does not have your AWS credentials in it?
Use a docker volume to pass your credential files into the container:
https://docs.docker.com/storage/volumes/
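For example (the image name is a placeholder, and this assumes the process in the container runs as root, so the profile files end up under /root/.aws):

docker run -v $HOME/.aws:/root/.aws:ro my-image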
It is not a good idea to bake credentials into a container image, because anybody who uses that image will have your credentials.
This is considered bad practice.
For more information how to properly deal with secrets see https://docs.docker.com/engine/swarm/secrets/
I ran into this problem while trying to assume a role on an ECS container. It turned out that in such cases, instead of source_profile, credential_source should be used. It takes the value of EcsContainer for the container, Ec2InstanceMetadata for the EC2 machine or Environment for other cases.
Since the solution is not very intuitive, I thought it might save someone the trouble despite the age of this question.
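For example, the profile from the question would look roughly like this on ECS (the role ARN placeholder is kept from the question):

[profile a]
region=eu-west-1
role_arn=arn:aws:iam::YY
credential_source=EcsContainer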
In the end, the issue was that the Docker container didn't have the credentials, and despite connecting to it through bash and adding them, it didn't work.
So, in the Dockerfile I added:
ADD myfolder/aws/credentials /root/.aws/credentials
to copy my local credentials file (created through the AWS CLI with aws configure) into the container. Then I rebuilt the image and it works.
I am attempting to pull information about an S3 bucket using boto3. Here is the setup (bucketname is set to a valid S3 bucket name):
import boto3
s3 = boto3.client('s3')
result = s3.get_bucket_acl(Bucket=bucketname)
When I try, I get this error:
ClientError: An error occurred (InvalidRequest) when calling the
GetBucketAcl operation: S3 Transfer Acceleration is not configured on
this bucket
So, I attempt to enable transfer acceleration:
s3.put_bucket_accelerate_configuration(Bucket=bucketname, AccelerateConfiguration={'Status': 'Enabled'})
But, I get this error, which seems silly, since the line above is attempting to configure the bucket. I do have IAM rights (Allow: *) to modify the bucket too:
ClientError: An error occurred (InvalidRequest) when calling the
PutBucketAccelerateConfiguration operation: S3 Transfer Acceleration
is not configured on this bucket
Does anyone have any ideas on what I'm missing here?
Although I borrowed the code in the original question from the boto3 documentation, this construct is not complete and did not provide the connectivity that I expected:
s3 = boto3.client('s3')
What is really needed is a fully initialized session and client, like this (assuming that the profile variable names a profile defined in the ~/.aws/config file and bucketname is a valid S3 bucket):
from boto3 import Session
session = Session(profile_name=profile)
client = session.client('s3')
result = client.get_bucket_acl(Bucket=bucketname)
After doing this (duh), I was able to connect with or without transfer acceleration.
Thanks to the commenters, since those comments led me to the solution.
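If you want to confirm the acceleration state afterwards, the same client can report it (a small sketch reusing the client and bucketname from the snippet above):

# Returns {'Status': 'Enabled'} or {'Status': 'Suspended'}; the key is absent
# if acceleration was never configured on the bucket.
status = client.get_bucket_accelerate_configuration(Bucket=bucketname)
print(status.get('Status'))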