What would be the client equivalent of this?
import boto3
s3 = boto3.resource('s3')
bucket = s3.Bucket('name')
I want to use the client, but it only has a list_buckets function. Is there a way to pass a bucket name to the client instead of picking it out of the result array?
This will give you the bucket names:
import boto3
s3 = boto3.resource('s3')
buckets = s3.buckets.all()
for bucket in buckets:
    print(bucket.name)
I don't think you can get only the names, but there is a tricky way to use the client:
import boto3
s3 = boto3.resource('s3')
s3.meta.client.list_buckets()['Buckets']
which returns the raw client response.
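If the goal is just the names from that response, a list comprehension over the 'Buckets' key works (a minimal sketch; run it against your own account):
import boto3
client = boto3.client('s3')
# Each entry in 'Buckets' is a dict with 'Name' and 'CreationDate' keys
names = [b['Name'] for b in client.list_buckets()['Buckets']]
print(names)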
I am facing something similar to "How to load file from custom hosted Minio s3 bucket into pandas using s3 URL format?"; however, I already have an initialized S3 session (from boto3).
How can I get the credentials from it to feed directly to pandas?
I.e., how can I extract the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the initialized boto3 S3 client?
You can use session.get_credentials:
import boto3
session = boto3.Session()
credentials = session.get_credentials()
AWS_ACCESS_KEY_ID = credentials.access_key
AWS_SECRET_ACCESS_KEY = credentials.secret_key
AWS_SESSION_TOKEN = credentials.token
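These can then be passed straight to pandas via storage_options (a sketch assuming s3fs is installed; the bucket and key are placeholders):
import boto3
import pandas as pd
session = boto3.Session()
credentials = session.get_credentials()
# pandas forwards storage_options to s3fs
df = pd.read_csv(
    's3://my-bucket/data.csv',
    storage_options={
        'key': credentials.access_key,
        'secret': credentials.secret_key,
        'token': credentials.token,
    },
)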
If you only have access to a boto3 client (like the S3 client), you can find the credentials hidden here:
client = boto3.client("s3")
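# Note: _request_signer and _credentials are private botocore internals and may change between versions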
client._request_signer._credentials.access_key
client._request_signer._credentials.secret_key
client._request_signer._credentials.token
If you don't want to handle credentials (I assume you're using SSO here), you can load the S3 object directly with pandas: pd.read_csv(s3_client.get_object(Bucket='Bucket', Key='FileName').get('Body'))
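Spelled out as a runnable sketch (bucket and key names are placeholders):
import boto3
import pandas as pd
s3_client = boto3.client('s3')
# get_object returns a dict; 'Body' is a file-like streaming object
obj = s3_client.get_object(Bucket='my-bucket', Key='data.csv')
df = pd.read_csv(obj['Body'])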
Code:
from google.cloud import storage
client = storage.Client()
bucket = ['symbol_wise_nse', 'symbol_wise_final']
for i in bucket:
    if client.get_bucket(i).exists():
        BUCKET = client.get_bucket(i)
If the bucket exists, I want to do client.get_bucket. How do I check whether the bucket exists or not?
Another option that doesn't use try: except is:
from google.cloud import storage
client = storage.Client()
bucket = ['symbol_wise_nse', 'symbol_wise_final']
for i in bucket:
    BUCKET = client.bucket(i)
    if BUCKET.exists():
        BUCKET = client.get_bucket(i)
The client itself has no dedicated method to check whether a bucket exists, but you will get a NotFound error if you try to access a non-existent bucket.
I would recommend that you either list the buckets in the project with storage_client.list_buckets() and then use the response to confirm whether the bucket exists, or, if you wish to perform client.get_bucket on every bucket in your project, just iterate through the response directly.
Hope you find this information useful.
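A minimal sketch of the first suggestion (the bucket name is illustrative):
from google.cloud import storage
client = storage.Client()
# list_buckets returns an iterator of Bucket objects in the project
existing = {b.name for b in client.list_buckets()}
if 'symbol_wise_nse' in existing:
    bucket = client.get_bucket('symbol_wise_nse')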
You can use something like this:
from google.cloud import storage
from google.cloud.exceptions import NotFound
client = storage.Client()
buckets = ['symbol_wise_nse', 'symbol_wise_final']
for i in buckets:
    try:
        bucket = client.get_bucket(i)
        print(bucket)
    except NotFound:
        pass
The following worked for me (re-using the params from the question):
from google.cloud import storage
from google.cloud.storage import Bucket
client = storage.Client()
exists = Bucket(client, 'symbol_wise_nse').exists()
I'm trying to print the available buckets on AWS but failed. I tried multiple tutorials online, and I kept getting "cannot locate credentials" and "'s3.ServiceResource' object has no attribute" errors.
s3 = boto3.resource('s3',aws_access_key_id = "Random",aws_secret_access_key = "Secret" )
client = s3.client('s3')
response = client.list_buckets()
print(response)
Can you try:
for bucket in s3.buckets.all():
    print(bucket.name)
The problem is probably because you are defining s3 as a resource:
s3 = boto3.resource('s3')
But then you are trying to use it as a client:
client = s3.client('s3')
That won't work. If you want a client, create one with:
s3_client = boto3.client('s3')
Or, you can extract a client from the resource:
s3_resource = boto3.resource('s3')
response = s3_resource.meta.client.list_buckets()
Or, sticking with the resource, you can use:
s3_resource = boto3.resource('s3')
for bucket in s3_resource.buckets.all():
    print(bucket.name)  # Do something with bucket
Confused? Try to stick with one method. The client directly matches the underlying API calls made to S3 and is the same as in all other languages. The resource is a more "Pythonic" way of accessing resources; the calls get translated into client API calls. Resources can be a little more challenging when figuring out required permissions, since there isn't a one-to-one mapping to actual API calls.
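To make the contrast concrete, here is the same operation (listing objects under a prefix) written both ways; the bucket name and prefix are placeholders:
import boto3
# Client style: mirrors the ListObjectsV2 API call
client = boto3.client('s3')
response = client.list_objects_v2(Bucket='my-bucket', Prefix='logs/')
for obj in response.get('Contents', []):
    print(obj['Key'])
# Resource style: the same listing, wrapped in a Pythonic iterator
resource = boto3.resource('s3')
for obj in resource.Bucket('my-bucket').objects.filter(Prefix='logs/'):
    print(obj.key)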
I know the bucket name, I have access to it, and I can browse it on the web and with the AWS CLI.
How can I access it with Python's boto3? All the examples assume accessing my own buckets:
import boto3
s3 = boto3.resource('s3')
for bucket in s3.buckets.all():
    print(bucket.name)
How do I reach someone else's bucket?
If you have access to someone else's bucket and you know its name, you can access it like this:
import boto3
s3 = boto3.resource('s3')
bucket = s3.Bucket('some-bucket-i-have-access-to')
for obj in bucket.objects.all():
    print(obj.key)
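If the bucket is large, iterating over everything can be slow; you can narrow the listing with a prefix (the prefix here is illustrative):
# List only the keys under a given prefix instead of the whole bucket
for obj in bucket.objects.filter(Prefix='reports/2023/'):
    print(obj.key)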
I'm implementing Boto3 to upload files to S3, and everything works fine. The process I'm following is this:
I get a base64 image from a FileReader JavaScript object. Then I send the base64 to the server by AJAX, decode the base64 image, and generate a random name for the Key argument.
data = json.loads(message['text'])
dec = base64.b64decode(data['image'])
s3 = boto3.resource('s3')
s3.Bucket('bucket_name').put_object(
    Key='random_generated_name.png',
    Body=dec,
    ContentType='image/png',
    ACL='public-read',
)
This works fine, but with respect to performance, is there a better way?
I used this and I believe it's more efficient and Pythonic.
import boto3
s3 = boto3.client('s3')
bucket = 'your-bucket-name'
file_name = 'location-of-your-file'
key_name = 'name-of-file-in-s3'
s3.upload_file(file_name, bucket, key_name)
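Since the question starts from decoded base64 bytes rather than a file on disk, upload_fileobj with an in-memory buffer may be a better fit; a sketch with placeholder names, re-using 'data' from the question:
import base64
import io
import boto3
s3 = boto3.client('s3')
dec = base64.b64decode(data['image'])  # 'data' as in the question
# upload_fileobj streams the buffer using boto3's managed (multipart-capable) transfer
s3.upload_fileobj(
    io.BytesIO(dec),
    'your-bucket-name',
    'random_generated_name.png',
    ExtraArgs={'ContentType': 'image/png', 'ACL': 'public-read'},
)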