I'm trying to print the available buckets on AWS but keep failing. I've tried multiple tutorials online, and I get "Unable to locate credentials" and "'s3.ServiceResource' object has no attribute" errors.
s3 = boto3.resource('s3', aws_access_key_id="Random", aws_secret_access_key="Secret")
client = s3.client('s3')
response = client.list_buckets()
print(response)
Can you try:
for bucket in s3.buckets.all():
    print(bucket.name)
The problem is probably that you are defining s3 as a resource:
s3 = boto3.resource('s3')
But then you are trying to use it as a client:
client = s3.client('s3')
That won't work. If you want a client, create one with:
s3_client = boto3.client('s3')
Or, you can extract a client from the resource:
s3_resource = boto3.resource('s3')
response = s3_resource.meta.client.list_buckets()
Or, sticking with the resource, you can use:
s3_resource = boto3.resource('s3')
for bucket in s3_resource.buckets.all():
    print(bucket.name)  # or do something else with each bucket
Confused? Try to stick with one method. The client maps directly to the underlying API calls made to S3 and matches the SDKs for other languages. The resource is a more "Pythonic" way of accessing S3; its calls are translated into client API calls under the hood. Resources can be a little more challenging when figuring out required permissions, since there isn't a one-to-one mapping to the actual API calls.
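For completeness, here is a minimal sketch of both approaches side by side. It assumes your credentials are resolved through the default chain (environment variables, ~/.aws/credentials, or an instance role); the "Unable to locate credentials" error usually means none of these are set up.
import boto3

# Client style: maps directly to the S3 REST API
s3_client = boto3.client('s3')
for bucket in s3_client.list_buckets()['Buckets']:
    print(bucket['Name'])

# Resource style: higher-level, more "Pythonic" objects
s3_resource = boto3.resource('s3')
for bucket in s3_resource.buckets.all():
    print(bucket.name)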
Related
I have uploaded an object with the following:
client = boto3.client('s3', aws_access_key_id=AWS_ACCESS_KEY_ID,
                      aws_secret_access_key=AWS_SECRET_ACCESS_KEY)
response = client.put_object(
    Bucket=BUCKET_NAME, Body=in_mem_file.getvalue(), Key=str(img_name))
and I'm generating the URL with:
url = client.generate_presigned_url('get_object', Params={
    'Bucket': BUCKET_NAME, 'Key': str(img_name)}, ExpiresIn=518400)
I need to generate the URL without expiring in some cases. Is it possible to generate a URL that never expires for an object in an S3 bucket?
I need to generate the URL without expiring in some cases
It's not possible; a pre-signed URL can be valid for at most 7 days. If you don't want to regenerate links every 7 days, then you have to look at other solutions for sharing S3 files. CloudFront in front of S3 is often used for this. Otherwise, you need a fully custom solution tailored to your specific needs.
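As a partial workaround, you can at least request the maximum expiry that SigV4 signing allows (7 days). A minimal sketch, reusing the client and names from the question:
MAX_EXPIRY = 7 * 24 * 60 * 60  # 604800 seconds, the upper limit for pre-signed URLs

url = client.generate_presigned_url(
    'get_object',
    Params={'Bucket': BUCKET_NAME, 'Key': str(img_name)},
    ExpiresIn=MAX_EXPIRY)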
If the object is public, it has a permanent URL of the form:
def get_object_url(region, bucket, object_key):
    return f"https://{bucket}.s3.{region}.amazonaws.com/{object_key}"
Otherwise, you cannot have a permanent link.
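For that permanent URL to work, the object itself has to be publicly readable. A minimal sketch, assuming the bucket's block-public-access settings and object ownership allow ACLs (the region value is just an example):
response = client.put_object(
    Bucket=BUCKET_NAME,
    Body=in_mem_file.getvalue(),
    Key=str(img_name),
    ACL='public-read')  # make this object publicly readable

print(get_object_url('us-east-1', BUCKET_NAME, str(img_name)))  # region is an assumption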
I'm using AWS S3 boto3 to upload files to my AWS bucket called uploadtesting. Here is an example implementation:
import boto3
...
s3 = boto3.resource('s3')
s3.meta.client.upload_file('files/report.pdf', 'uploadtesting', 'report.pdf')
Accessing the object from the AWS S3 console lets you see the object URL; however, it is not a downloadable link. What I want to know is: how can I use Python to print out a downloadable link to the file I just uploaded?
It appears you are asking how to generate a URL that allows a private object to be downloaded.
This can be done by generating an Amazon S3 pre-signed URL, which provides access to a private S3 object for a limited time.
Basically, using credentials that have access to the object, you can create a URL that is 'signed'. When Amazon S3 receives this URL, it verifies the signature and provides access to the object if the expiry period has not ended.
From Presigned URLs — Boto3 documentation:
response = s3_client.generate_presigned_url('get_object',
                                            Params={'Bucket': bucket_name,
                                                    'Key': object_name},
                                            ExpiresIn=expiration)
The ExpiresIn parameter is expressed in seconds.
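Applied to your example, a minimal sketch (the one-hour expiry is just an illustrative choice):
import boto3

s3 = boto3.resource('s3')
s3.meta.client.upload_file('files/report.pdf', 'uploadtesting', 'report.pdf')

url = s3.meta.client.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'uploadtesting', 'Key': 'report.pdf'},
    ExpiresIn=3600)  # link stays valid for one hour
print(url)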
If you instead want the plain object URL shown in the console (which is only downloadable if the object is public), the format is:
https://BUCKET-NAME.s3.amazonaws.com/OBJECT-KEY
So your object would be:
https://uploadtesting.s3.amazonaws.com/report.pdf
There is no supplied function to generate this string, so use:
url = f'https://{bucket_name}.s3.amazonaws.com/{key}'
What would be the client alternative for this?
import boto3
s3 = boto3.resource('s3')
bucket = s3.Bucket('name')
I want to use the client, but that only has a list_buckets function. Is there a way to pass a bucket name to the client instead of getting the details from the result array?
This will give you the bucket names:
import boto3
s3 = boto3.resource('s3')
buckets = s3.buckets.all()
for bucket in buckets:
    print(bucket.name)
I don't think you can get only the names.
But there is a tricky way to use the client:
import boto3
s3 = boto3.resource('s3')
s3.meta.client.list_buckets()['Buckets']
which gives you the client's response.
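If you only want the names out of that client response, a list comprehension does the trick, e.g.:
import boto3

s3_client = boto3.client('s3')
bucket_names = [b['Name'] for b in s3_client.list_buckets()['Buckets']]
print(bucket_names)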
I am attempting to pull information about an S3 bucket using boto3. Here is the setup (bucketname is set to a valid S3 bucket name):
import boto3
s3 = boto3.client('s3')
result = s3.get_bucket_acl(Bucket=bucketname)
When I try, I get this error:
ClientError: An error occurred (InvalidRequest) when calling the
GetBucketAcl operation: S3 Transfer Acceleration is not configured on
this bucket
So, I attempt to enable transfer acceleration:
s3.put_bucket_accelerate_configuration(Bucket=bucketname, AccelerateConfiguration={'Status': 'Enabled'})
But I get this error, which seems odd, since the line above is attempting to configure the bucket. I do have IAM rights (Allow: *) to modify the bucket, too:
ClientError: An error occurred (InvalidRequest) when calling the
PutBucketAccelerateConfiguration operation: S3 Transfer Acceleration
is not configured on this bucket
Does anyone have any ideas on what I'm missing here?
Although I borrowed the code in the original question from the boto3 documentation, this construct is not complete and did not provide the connectivity that I expected:
s3 = boto3.client('s3')
What is really needed are a fully initialized session and client, like this (assuming that the profile variable is set correctly in the ~/.aws/config file and bucketname is a valid S3 bucket):
from boto3 import Session
session = Session(profile_name=profile)
client = session.client('s3')
result = client.get_bucket_acl(Bucket=bucketname)
After doing this (duh), I was able to connect with or without transfer acceleration.
Thanks to the commenters, since those comments led me to the solution.
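If you want to verify whether acceleration is actually enabled before relying on it, a small sketch using the same session-based client:
accel = client.get_bucket_accelerate_configuration(Bucket=bucketname)
# The response contains {'Status': 'Enabled'} or {'Status': 'Suspended'};
# the key is absent if acceleration has never been configured on the bucket.
print(accel.get('Status', 'Not configured'))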
I am successfully authenticating with AWS and using the 'put_object' method on the Bucket object to upload a file. Now I want to use the multipart API to accomplish this for large files. I found the accepted answer in this question:
How to save S3 object to a file using boto3
But when trying to implement it, I get "unknown method" errors. What am I doing wrong? My code is below. Thanks!
## Get an AWS Session
self.awsSession = Session(aws_access_key_id=accessKey,
                          aws_secret_access_key=secretKey,
                          aws_session_token=session_token,
                          region_name=region_type)
...
# Upload the file to S3
s3 = self.awsSession.resource('s3')
s3.Bucket('prodbucket').put_object(Key=fileToUpload, Body=data) # WORKS
#s3.Bucket('prodbucket').upload_file(dataFileName, 'prodbucket', fileToUpload) # DOESNT WORK
#s3.upload_file(dataFileName, 'prodbucket', fileToUpload) # DOESNT WORK
The upload_file method has not been ported over to the bucket resource yet. For now you'll need to use the client object directly to do this:
client = self.awsSession.client('s3')
client.upload_file(...)
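For reference, a minimal sketch with the arguments from the question filled in. upload_file performs multipart uploads automatically for large files, and the thresholds can be tuned with a TransferConfig (the 8 MB values are just illustrative):
from boto3.s3.transfer import TransferConfig

config = TransferConfig(multipart_threshold=8 * 1024 * 1024,  # switch to multipart above 8 MB
                        multipart_chunksize=8 * 1024 * 1024)  # size of each uploaded part

client = self.awsSession.client('s3')
client.upload_file(dataFileName,   # local path of the file to upload
                   'prodbucket',   # destination bucket
                   fileToUpload,   # key to store the object under
                   Config=config)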
The Libcloud S3 wrapper transparently handles all the splitting and uploading of the parts for you.
Use the upload_object_via_stream method to do so:
from libcloud.storage.types import Provider
from libcloud.storage.providers import get_driver
# Path to a very large file you want to upload
FILE_PATH = '/home/user/myfile.tar.gz'
cls = get_driver(Provider.S3)
driver = cls('api key', 'api secret key')
container = driver.get_container(container_name='my-backups-12345')
# This method blocks until all the parts have been uploaded.
extra = {'content_type': 'application/octet-stream'}
with open(FILE_PATH, 'rb') as iterator:
    obj = driver.upload_object_via_stream(iterator=iterator,
                                          container=container,
                                          object_name='backup.tar.gz',
                                          extra=extra)
For official documentation on the S3 multipart upload feature, refer to the AWS official blog.