I am trying to mock AWS S3 API calls using boto2.
I create a local S3 endpoint using LocalStack and can use it from boto3 easily, as below:
import boto3
s3_client = boto3.client('s3', endpoint_url='http://localhost:4572')
bucket_name = 'my-bucket'
s3_client.create_bucket(Bucket=bucket_name)
But I have not found a way to do this with boto2. Is there any way, preferably via ~/.boto or ~/.aws/config?
I tried providing the endpoint with boto2, but it failed:
import boto
boto.s3.S3RegionInfo(name='test-s3-region', endpoint='http://127.0.0.1:4572/')
s3 = boto.s3.connect_to_region('test-s3-region')
print s3.get_bucket('test-poc')
error:
AttributeError: 'NoneType' object has no attribute 'get_bucket'
I am looking to use local endpoints for all AWS services for testing purposes.
This works for me:
import boto
from boto.s3.connection import S3Connection
region = boto.s3.S3RegionInfo(name='test-s3-region', endpoint='http://127.0.0.1:4572/', connection_cls=S3Connection)
conn = region.connect()
print conn.get_bucket('test-poc')
You need to set the connection_cls attribute, which is None by default.
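Alternatively, boto2's S3Connection can be pointed at a custom host and port directly. This is only a sketch, assuming LocalStack's S3 is listening on 127.0.0.1:4572 over plain HTTP and accepts dummy credentials:
from boto.s3.connection import S3Connection, OrdinaryCallingFormat

# Assumed LocalStack setup: S3 on 127.0.0.1:4572, plain HTTP, dummy credentials.
conn = S3Connection(
    aws_access_key_id='test',
    aws_secret_access_key='test',
    host='127.0.0.1',
    port=4572,
    is_secure=False,                         # no TLS for the local endpoint
    calling_format=OrdinaryCallingFormat(),  # path-style addressing instead of <bucket>.<host>
)
print(conn.get_bucket('test-poc'))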
Related
I am trying to download files with Python scripts from a public AWS S3 bucket listed on this website, for example the first object at the link. I tried boto3 and got a NoCredentialsError:
import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('oedi-data-lake')
keys = []
for obj in bucket.objects.filter(Prefix='nrel-pds-building-stock/end-use-load-profiles-for-us-building-stock/2022/resstock_tmy3_release_1/building_energy_models/upgrade=10/'):
    if obj.key.endswith('bldg0000001-up10.zip'):
        keys.append(obj.key)
print(keys)
I also found the post Download file/folder from Public AWS S3 with Python, no credentials,
and tried the following:
import requests

headers = {'Host': 'oedi-data-lake.s3.amazonaws.com'}
url = 'https://oedi-data-lake.s3.amazonaws.com/nrel-pds-building-stock/end-use-load-profiles-for-us-building-stock/2022/resstock_tmy3_release_1/building_energy_models/upgrade=10/bldg0000001-up10.zip'
r = requests.get(url, headers=headers)
but got an SSLCertVerificationError.
Please help. :)
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Thank you, jhashimoto!
But with the following, I still get a NoCredentialsError:
import boto3
from botocore import UNSIGNED
from botocore.config import Config
s3 = boto3.resource("s3", config=Config(signature_version=UNSIGNED))
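# note: this second client is created without the UNSIGNED config, so it still looks for credentials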
s3_client = boto3.client('s3')
s3_client.download_file('oedi-data-lake', 'nrel-pds-building-stock/end-use-load-profiles-for-us-building-stock/2022/resstock_tmy3_release_1/building_energy_models/upgrade=10/bldg0000001-up10.zip', 'bldg1.zip')
I also read can-i-use-boto3-anonymously and changed the code as below:
import boto3
from botocore import UNSIGNED
from botocore.config import Config
client = boto3.client('s3', aws_access_key_id='', aws_secret_access_key='')
client._request_signer.sign = (lambda *args, **kwargs: None)
client.download_file('oedi-data-lake', 'nrel-pds-building-stock/end-use-load-profiles-for-us-building-stock/2022/resstock_tmy3_release_1/building_energy_models/upgrade=10/bldg0000001-up10.zip', 'bldg01.zip')
and got an SSLCertVerificationError.
Is this something caused by my company's security policy?
Sorry for the naive questions; I'm completely new to AWS.
Thank you so much!
To access a bucket that allows anonymous access, configure the client not to use credentials.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

s3 = boto3.resource("s3", config=Config(signature_version=UNSIGNED))
bucket = s3.Bucket('oedi-data-lake')
keys = []
for obj in bucket.objects.filter(Prefix='nrel-pds-building-stock/end-use-load-profiles-for-us-building-stock/2022/resstock_tmy3_release_1/building_energy_models/upgrade=10/'):
    if obj.key.endswith('bldg0000001-up10.zip'):
        keys.append(obj.key)
print(keys)
# output:
# ['nrel-pds-building-stock/end-use-load-profiles-for-us-building-stock/2022/resstock_tmy3_release_1/building_energy_models/upgrade=10/bldg0000001-up10.zip']
python - Can I use boto3 anonymously? - Stack Overflow
Yes. Your credentials are used to sign all the requests you send out, so what you have to do is configure the client to not perform the signing step at all.
Note:
Unrelated to the main topic: the AWS Python SDK team does not intend to add new features to the resource interface. You can use the client interface instead.
Resources — Boto3 Docs 1.26.54 documentation
The AWS Python SDK team does not intend to add new features to the resources interface in boto3. Existing interfaces will continue to operate during boto3's lifecycle. Customers can find access to newer service features through the client interface.
Added at 2023/01/21 12:00:
This is sample code using the client interface.
import boto3
from botocore import UNSIGNED
from botocore.config import Config
s3_client = boto3.client('s3', config=Config(signature_version=UNSIGNED))
s3_client.download_file('oedi-data-lake', 'nrel-pds-building-stock/end-use-load-profiles-for-us-building-stock/2022/resstock_tmy3_release_1/building_energy_models/upgrade=10/bldg0000001-up10.zip', 'bldg1.zip')
I have access to Google Cloud and AWS. I want to upload a file from Vertex AI Workbench to AWS S3. Is that possible, or is there an alternative way?
I have read some threads that might help, and tried some code, but still can't solve my problem; it raises an error:
Could not connect to the endpoint URL:
"https://xyz.s3.auto.amazonaws.com/uploaded.csv?uploads"
Here is my code:
import boto3
import os
import io

s3 = boto3.resource('s3')
key_id = "my_key"
access_key = "my_access_key"

client = boto3.client("s3", region_name="auto", aws_access_key_id=key_id, aws_secret_access_key=access_key)
client.upload_file(
    Filename="path_file.csv",
    Bucket="bucket_name",
    Key="uploaded.csv",
)
I think the issue here is that you're using region_name="auto", which AWS does not support. The region needs to be a real region because (as you can see in the error) it is used to build the endpoint URL.
Try it without that:
import boto3

key_id = "my_key"
access_key = "my_access_key"

client = boto3.client("s3", aws_access_key_id=key_id, aws_secret_access_key=access_key)
client.upload_file(
    Filename="path_file.csv",
    Bucket="bucket_name",
    Key="uploaded.csv",
)
I would like to use a boto3 config object to configure connection timeout and other attributes when interacting with DynamoDB through boto3. I have already written my code using a boto3 resource, and all the examples I have been able to find leverage a boto3 client instead when using a config object.
Is it possible to use a config object with a boto3 resource, and if not, why?
I learned that a boto3 resource object does in fact accept a config parameter, so I was able to define a config object in my wrapper class:
from botocore.config import Config
...
self.config = Config(
    connect_timeout=1,
    read_timeout=1
)
And then later do this:
self.dynamodb = boto3.resource('dynamodb', config=self.config)
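For completeness, here is a minimal standalone sketch of the same idea outside a wrapper class; the table name and retry settings are placeholders:
import boto3
from botocore.config import Config

# Placeholder timeout/retry values; tune for your workload.
config = Config(connect_timeout=1, read_timeout=1, retries={'max_attempts': 2})

# The same Config object is accepted by boto3.resource, just as by boto3.client.
dynamodb = boto3.resource('dynamodb', config=config)
table = dynamodb.Table('my-table')  # hypothetical table name
print(table.table_status)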
import tinys3

conn = tinys3.Connection(S3_ACCESS_KEY, S3_SECRET_KEY)
f = open('sample.zip', 'rb')
conn.upload('sample.zip', f, bucketname)
I can upload the file to my bucket (test) via the code above, but I want to upload it directly to test/images/example. I am open to moving over to boto, but I can't seem to import boto.s3 in my environment.
I have looked through How to upload a file to directory in S3 bucket using boto but none of the tinys3 examples show this.
import boto3
client = boto3.client('s3', region_name='ap-southeast-2')
client.upload_file('/tmp/foo.txt', 'my-bucket', 'test/images/example/foo.txt')
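S3 has no real directories; the "folder" is just a prefix on the object key, so the same idea should also work with tinys3 itself. A sketch assuming the question's credential variables and a bucket named test:
import tinys3

# Assumes S3_ACCESS_KEY / S3_SECRET_KEY are defined as in the question.
conn = tinys3.Connection(S3_ACCESS_KEY, S3_SECRET_KEY)
with open('sample.zip', 'rb') as f:
    # The "directory" is simply part of the key name.
    conn.upload('images/example/sample.zip', f, 'test')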
The following worked for me:
from boto3.s3.transfer import S3Transfer
from boto3 import client

client_obj = client('s3',
                    aws_access_key_id='my_aws_access_key_id',
                    aws_secret_access_key='my_aws_secret_access_key')
transfer = S3Transfer(client_obj)
transfer.upload_file(src_file,
                     'my_s3_bucket_name',
                     dst_file,
                     extra_args={'ContentType': "application/zip"})
I am successfully authenticating with AWS and using the 'put_object' method on the Bucket object to upload a file. Now I want to use the multipart API to accomplish this for large files. I found the accepted answer in this question:
How to save S3 object to a file using boto3
But when trying to implement it, I get "unknown method" errors. What am I doing wrong? My code is below. Thanks!
from boto3.session import Session

## Get an AWS Session
self.awsSession = Session(aws_access_key_id=accessKey,
                          aws_secret_access_key=secretKey,
                          aws_session_token=session_token,
                          region_name=region_type)
...
# Upload the file to S3
s3 = self.awsSession.resource('s3')
s3.Bucket('prodbucket').put_object(Key=fileToUpload, Body=data)  # WORKS
# s3.Bucket('prodbucket').upload_file(dataFileName, 'prodbucket', fileToUpload)  # DOESN'T WORK
# s3.upload_file(dataFileName, 'prodbucket', fileToUpload)  # DOESN'T WORK
The upload_file method has not been ported over to the bucket resource yet. For now you'll need to use the client object directly to do this:
client = self.awsSession.client('s3')
client.upload_file(...)
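Note that client.upload_file already performs multipart uploads automatically for large files via the transfer manager. As a rough standalone sketch (the file path, bucket, and key below are placeholders), the threshold and concurrency can be tuned with a TransferConfig:
import boto3
from boto3.s3.transfer import TransferConfig

# Placeholder values; upload_file switches to a multipart upload once the file
# exceeds multipart_threshold.
config = TransferConfig(multipart_threshold=8 * 1024 * 1024,
                        multipart_chunksize=8 * 1024 * 1024,
                        max_concurrency=4)

client = boto3.client('s3')
client.upload_file('/path/to/large_file.zip', 'prodbucket', 'large_file.zip',
                   Config=config)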
The Libcloud S3 wrapper transparently handles all the splitting and uploading of the parts for you.
Use the upload_object_via_stream method to do so:
from libcloud.storage.types import Provider
from libcloud.storage.providers import get_driver

# Path to a very large file you want to upload
FILE_PATH = '/home/user/myfile.tar.gz'

cls = get_driver(Provider.S3)
driver = cls('api key', 'api secret key')
container = driver.get_container(container_name='my-backups-12345')

# This method blocks until all the parts have been uploaded.
extra = {'content_type': 'application/octet-stream'}
with open(FILE_PATH, 'rb') as iterator:
    obj = driver.upload_object_via_stream(iterator=iterator,
                                          container=container,
                                          object_name='backup.tar.gz',
                                          extra=extra)
For official documentation on the S3 multipart upload feature, refer to the AWS Official Blog.